5 tips to improve your AI Image creation prompts

Yeah i lied, it’s actually 6 things you’re doing wrong. My bad.

Since y’all have probably used Midjourney V5 in the past 6 months or so, i wanted to share a couple of tips and tricks to improve your prompts. This works for most AI image generators such as Dall-E, Midjourney, Gencraft, etc.

What are prompts?

I’m pretty certain that 80% of the English-speaking IT folks know this, but i’ll do it for the 80% of German IT engineers who have no fucking clue.

Prompts are the thing you tell the AI to do. You prompt it to draw an image of “a gay pirate wearing Nazi pyjamas”, and it will do so. The prompt is your input for most of the current Transformer-based models.

Tip 1 - make your prompts shorter

Having very long prompts doesn’t help. It’s not like horsepower where more is better; it’s about being precise and concise. Almost like talking to your girlfriend: the more you talk, the deeper the trouble gets. Just stay on point.

Tip 2 - not providing enough constraints

Same thing: if you don’t give your girlfriend any constraints, then god knows, she’ll start buying import beer or Rosé. Same goes for AI image generators. Open-ended prompts without guidance will produce random results, nothing you can repeat or use. For example, “Spaceship” or “Airplane” might be too general; try something like “an American Airlines airplane” or “a futuristic spaceship, Star Trek style, entering a wormhole”.

Tip 3 - ambiguous language

You need to be very clear, or your Girlfriend^WAI will have issues. Avoid pronouns and implied subjects that can confuse the Girlfriend^WAI. For example, “a knight and a dragon, while he breathes fire” leaves the AI guessing who breathes fire; “a knight facing a fire-breathing dragon” does not.

Tip 4 - abstract concepts

Just like your girlfriend, abstract concepts make the AI’s head hurt. It has no idea about emotions, so instead of having your AI prompt be “Painting of a happy female”, make it “Painting of a smiling female”. The AI will then know what smiling is, but the emotion/abstract concept behind “happy” might be going too far.

Tip 5 - being impatient

So the first result that Dall-E or Midjourney yielded isn’t perfect. You’re giving up of course… But nobody told you that it can take several attempts, and especially some learning, to craft your prompts correctly.

Depending on the model your AI has been trained on, you will need to play around to become proficient. Have you ever heard of the 8 year old car mechanic? Exactly.

Tip 6 - use ChatGPT to create your ideal prompts

Now here comes the kicker. Instead of retraining your brain to write these stupid prompts, you could also just ask ChatGPT (GPT-4) to write them for you.

Priming ChatGPT

You are an AI model developed by OpenAI, and I’m introducing you to Midjourney, an AI tool that generates images based on text prompts. Your task is to assist me in creating these prompts. Are you ready?

Examples

You should pass ChatGPT some examples of good prompts so it knows what you’re looking for.

Some examples of really good prompts are:
- “A serene mountain landscape covered in a blanket of snow, rendered in a minimalist style, capturing the subtle shades of white and blue under the soft morning light.”
- “A futuristic cityscape at night, illuminated by neon lights and towering skyscrapers, depicted in a cyberpunk art style with a strong emphasis on the contrast between the dark shadows and vibrant glow.”
- “A bustling market scene in a bustling Asian city, portrayed with a lively and vibrant palette, highlighting the intricate patterns of textiles and the busy atmosphere with dynamic brushstrokes.”

For instance, you can ask:

Can you generate a prompt for an image of a beach under a starry night sky?

Or you can phrase your query like this:

Title: “A beach under a starry night sky.”

ChatGPT will then create a prompt based on the provided concept:

Prompt: “A tranquil beach scene beneath a star-studded night sky, rendered in an impressionistic style. The sands are soft and glistening, reflecting the brilliant stars above. The gentle waves ebb and flow with a mesmerizing rhythm, the whitecaps gleaming under the stellar glow. The calm serenity of the ocean and the enigmatic vastness of the starry night sky paint a breathtaking tableau.”

Switching to hexo.io

After trying to upgrade my Jekyll version from two years ago, and failing miserably, i decided to look for alternatives.

Meet Hexo.io

What is it?

Hexo is a fast, simple blog framework powered by Node.js that turns Markdown posts into a static site. The entire migration took me (incl. reworking some pages and adding new ones) about 3 hours.

Since i am running nixos on my Macs, it’s easily doable with a shell.nix.

{ pkgs ? import <nixpkgs> {} }:

let
  lib = import <nixpkgs/lib>;
  NODE_MODULES_PREFIX = toString ./node_modules;

in pkgs.mkShell {
  packages = with pkgs; [
    nodejs
    nodePackages.npm
  ];

  inherit NODE_MODULES_PREFIX;

  shellHook = ''
    export PATH="$PATH:$NODE_MODULES_PREFIX/hexo-cli/bin"
    npm install
  '';
}

Now i can just cd into my repo, run nix-shell and have hexo on my PATH, ready to make new posts!
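For reference, a typical session then looks something like this (the post title is just an example; new, server and generate are stock hexo-cli commands):

cd ~/blog                     # the repo containing the shell.nix above
nix-shell                     # node, npm and hexo land on the PATH
hexo new post "hello-hexo"    # creates source/_posts/hello-hexo.md
hexo server                   # live preview on http://localhost:4000
hexo generate                 # builds the static site into public/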

OpenSMTPD and Dovecot with a shared PostgreSQL, Sieve and RSpamd on OpenBSD 6.6

I finally got around to setting up a new mailserver and i decided to give OpenSMTPD a try. It wasn’t a natural birth, i can tell you that. The switch in configuration syntax makes for a lot of outdated Google search results.

So what are we going to set up? Well, the title gave it away i guess, so for the slow ones amongst you: we are building a mailserver with OpenSMTPD, Dovecot, RSpamd and Sieve. OpenSMTPD and Dovecot will both be using the same authentication table and hashing scheme, making this a nifty solution.

Installing the required components

pkg_add postgresql-server opensmtpd-extras opensmtpd-extras-pgsql opensmtpd-filter-rspamd opensmtpd-filter-senderscore rspamd dovecot dovecot-pigeonhole dovecot-postgresql redis

Enabling them on boot

rcctl enable httpd
rcctl enable smtpd
rcctl enable postgresql
rcctl enable rspamd
rcctl enable dovecot
rcctl start dovecot
rcctl enable redis
rcctl start redis

Setting up DNS

This has been explained in numerous posts on the Internet; you should by now know how to set up an MX record (and maybe SPF and DKIM).
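If you want to sanity-check what you configured, dig does the trick (your.domain and the hostname are placeholders):

dig +short MX your.domain
dig +short TXT your.domain               # should include your SPF record, e.g. "v=spf1 mx -all"
dig +short A replace.with.host.name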

Setting up Let’s Encrypt SSL Certificates

/etc/httpd.conf

Configure httpd to serve the ACME challenges.

server "replace.with.host.name" {
listen on * port 80
location "/.well-known/acme-challenge/*" {
root "/acme"
request strip 2
}
location "/" {
block return 301 "https://$SERVER_NAME$REQUEST_URI"
}
}

And then start httpd:

rcctl start httpd

/etc/acme-client.conf

Now we go on to configure the acme-client.

api_url="https://acme-v02.api.letsencrypt.org/directory"
authority letsencrypt {
    api url $api_url
    account key "/etc/acme/letsencrypt-privkey.pem"
}

domain replace.with.host.name {
    #alternative names { www.replace.with.host.name }
    domain key "/etc/ssl/private/replace.with.host.name.key"
    #domain certificate "/etc/ssl/replace.with.host.name.crt"
    domain full chain certificate "/etc/ssl/replace.with.host.name.crt"
    sign with letsencrypt
}

Obtaining a certificate

acme-client -v replace.with.host.name

Adding certificate renewal to cron

Enter the crontab with crontab -e and add the following line:

30      0       *       *       *       /usr/sbin/acme-client replace.with.host.name && /usr/sbin/rcctl restart smtpd && /usr/sbin/rcctl restart dovecot

Preparations for our services

/etc/login.conf

Go ahead and add the following lines at the end of your /etc/login.conf:

dovecot:\
    :openfiles-cur=1024:\
    :openfiles-max=4096:\
    :tc=daemon:

postgresql:\
    :openfiles=768:\
    :tc=daemon:

Once done, have the file cap_mkdb’d like this:

cap_mkdb /etc/login.conf

/etc/sysctl.conf

Append the following values to /etc/sysctl.conf so PostgreSQL has a bit of breathing room:

kern.seminfo.semmni=60
kern.seminfo.semmns=1024

Then go on to actually setting them in the kernel:

sysctl -w kern.seminfo.semmni=60 kern.seminfo.semmns=1024

Adding a vmail user and group

useradd -m -d /var/vmail -s /sbin/nologin vmail

Preparing PostgreSQL

su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W
exit
rcctl start postgresql

Next we are going to add a user, a database, two tables and three views:

psql -Upostgres <<EOF
CREATE USER mail WITH ENCRYPTED PASSWORD 'your.mail.password';
CREATE DATABASE mail OWNER mail;
EOF

psql -Umail mail <<EOF

-- this is the table for the users accounts
CREATE TABLE public.accounts (
    id serial,
    email character varying(255) DEFAULT ''::character varying NOT NULL,
    password character varying(255) DEFAULT ''::character varying NOT NULL,
    active boolean DEFAULT true NOT NULL
);

-- this is the table for the virtual mappings for email -> email
CREATE TABLE public.virtuals (
    id serial,
    email character varying(255) DEFAULT ''::character varying NOT NULL,
    destination character varying(255) DEFAULT ''::character varying NOT NULL
);

-- this view is used to determine where to deliver things
CREATE VIEW public.delivery AS
    SELECT virtuals.email,
        virtuals.destination
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    UNION
    SELECT accounts.email,
        'vmail'::character varying AS destination
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0);

-- this view is used to determine which domains this server is serving
CREATE VIEW public.domains AS
    SELECT split_part((virtuals.email)::text, '@'::text, 2) AS domain
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    GROUP BY (split_part((virtuals.email)::text, '@'::text, 2))
    UNION
    SELECT split_part((accounts.email)::text, '@'::text, 2) AS domain
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0)
    GROUP BY (split_part((accounts.email)::text, '@'::text, 2));

-- this view should control the email addresses users can send with
CREATE VIEW public.sending AS
    SELECT virtuals.email,
        virtuals.destination AS login
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    UNION
    SELECT accounts.email,
        accounts.email AS login
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0);
EOF

/etc/mail/postgres.conf

Next we configure the PostgreSQL lookups for smtpd:

conninfo host='localhost' user='mail' password='your.mail.password' dbname='mail'
query_alias SELECT "destination" FROM delivery WHERE "email"=$1;
query_credentials SELECT "email", "password" FROM accounts WHERE "email"=$1;
query_domain SELECT "domain" FROM domains WHERE "domain"=$1;
query_mailaddrmap SELECT "email" FROM sending WHERE "login"=$1;

Also, since this file contains the password to the database, only _smtpd should be able to read it:

chown _smtpd:_smtpd /etc/mail/postgres.conf
chmod o= /etc/mail/postgres.conf

/etc/mail/smtpd.conf

Now we can go ahead and configure OpenSMTPD:

table aliases file:/etc/mail/aliases
table auths postgres:/etc/mail/postgres.conf
table domains postgres:/etc/mail/postgres.conf
table virtuals postgres:/etc/mail/postgres.conf
table sendermap postgres:/etc/mail/postgres.conf


pki replace.with.host.name cert "/etc/ssl/replace.with.host.name.crt"
pki replace.with.host.name key "/etc/ssl/private/replace.with.host.name.key"


filter check_dyndns phase connect match rdns regex { '.*\.dyn\..*', '.*\.dsl\..*' } \
disconnect "550 no residential connections"

filter check_rdns phase connect match !rdns \
disconnect "550 no rDNS is so 80s"

filter check_fcrdns phase connect match !fcrdns \
disconnect "550 no FCrDNS is so 80s"

filter senderscore \
proc-exec "filter-senderscore -blockBelow 10 -junkBelow 70 -slowFactor 5000"

filter rspamd proc-exec "filter-rspamd"


listen on all tls pki replace.with.host.name filter { check_dyndns, check_rdns, check_fcrdns, senderscore, rspamd }
#listen on all port smtps smtps pki replace.with.host.name auth <auths> senders <sendermap> masquerade
#listen on all port submission tls-require pki replace.with.host.name auth <auths> senders <sendermap> masquerade
listen on all port smtps smtps pki replace.with.host.name auth <auths>
listen on all port submission tls-require pki replace.with.host.name auth <auths>

action "receive_aliases" lmtp "/var/dovecot/lmtp" rcpt-to alias <aliases>
match from local for local action "receive_aliases"

action "receive_vmail" lmtp "/var/dovecot/lmtp" rcpt-to virtual <virtuals>
match from any for domain <domains> action "receive_vmail"

action "outbound" relay helo replace.with.host.name
match from auth for any action "outbound"

And finally start the smtpd:

rcctl start smtpd

/etc/rspamd/worker-proxy.inc

In this file i actually just changed the spam_header to X-Spam-Status, but this is optional.

/etc/rspamd/local.d/greylist.conf

greylist {
    servers = "127.0.0.1:6379";
    timeout = 1min;
}

Then we start rspamd:

rcctl start rspamd

Diff style below here

I’ve chosen to only put in things you need to change or append, everything else should remain as is.

Why did i do this? Well since dovecot has evolved into this nice configuration-file layout, i decided that this is the most efficient way to keep this document clean and relevant.
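Once you have worked through the diffs below, a quick sanity check i like (doveconf ships with dovecot, nothing extra to install): doveconf -n prints only the settings that differ from the defaults, so you can compare the result against this post.

doveconf -n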

/etc/dovecot/conf.d/10-auth.conf

Towards the beginning of the file, disable plaintext authentication:

disable_plaintext_auth = yes

Then at the end of the file, there are several includes. We are going to comment out auth-system.conf.ext and include auth-sql.conf.ext instead:

#!include auth-system.conf.ext
!include auth-sql.conf.ext

/etc/dovecot/conf.d/10-mail.conf

We change the mailbox location to our vmail directory.

mail_location = maildir:/var/vmail/%u

/etc/dovecot/conf.d/10-master.conf

Next we are going to uncomment our SSL listeners; feel free to leave 143 and 110 in, as they are using STARTTLS:

service imap-login {
    inet_listener imap {
        port = 143
    }
    inet_listener imaps {
        port = 993
        ssl = yes
    }
}

service pop3-login {
    inet_listener pop3 {
        port = 110
    }
    inet_listener pop3s {
        port = 995
        ssl = yes
    }
}

Then make sure lmtp is configured with the correct permissions:

service lmtp {
    unix_listener lmtp {
        mode = 0660
        user = vmail
        group = vmail
    }
}

/etc/dovecot/conf.d/10-ssl.conf

Now we are going to configure SSL:

ssl = required

ssl_cert = </etc/ssl/replace.with.host.name.crt
ssl_key = </etc/ssl/private/replace.with.host.name.key

ssl_prefer_server_ciphers = yes

/etc/dovecot/conf.d/15-lda.conf

Next up is our local delivery agent (LDA):

postmaster_address = postmaster@your.domain
hostname = replace.with.host.name

lda_mailbox_autocreate = yes

/etc/dovecot/conf.d/20-lmtp.conf

Now we configure our LMTP for Sieve:

protocol lmtp {
    # Space separated list of plugins to load (default is global mail_plugins).
    mail_plugins = $mail_plugins sieve
}

/etc/dovecot/conf.d/90-plugin.conf

Here we configure the Sieve plugin itself:

plugin {
    sieve_plugins = sieve_imapsieve sieve_extprograms
    sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment

    imapsieve_mailbox1_name = Junk
    imapsieve_mailbox1_causes = COPY APPEND
    imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve

    imapsieve_mailbox2_name = *
    imapsieve_mailbox2_from = Junk
    imapsieve_mailbox2_causes = COPY
    imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

    imapsieve_mailbox3_name = Inbox
    imapsieve_mailbox3_causes = APPEND
    imapsieve_mailbox3_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

    sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve
}

/usr/local/lib/dovecot/sieve/report-spam.sieve

Here we are going to fill in the default action for when we move mails to the spam folder. In our case we learn them as spam:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
    set "username" "${1}";
}

pipe :copy "sa-learn-spam.sh" [ "${username}" ];

/usr/local/lib/dovecot/sieve/sa-learn-spam.sh

Fill the file with the following contents:

#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_spam

/usr/local/lib/dovecot/sieve/report-ham.sieve

Here we are going to fill in the default action for when we move mails out of the spam folder. In our case we learn them as ham:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
    set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
    stop;
}

if environment :matches "imap.user" "*" {
    set "username" "${1}";
}

pipe :copy "sa-learn-ham.sh" [ "${username}" ];

/usr/local/lib/dovecot/sieve/sa-learn-ham.sh

Fill the file with the following contents:

#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_ham

Making both scripts executable

To make them runnable by the system, we make them executable:

chmod +x /usr/local/lib/dovecot/sieve/*.sh

/etc/dovecot/conf.d/90-sieve.conf

And here we configure the sieve folder that gets used for everyone.

sieve_before = /var/vmail/sieve/

/var/vmail/sieve/junk.sieve

First create that folder:

mkdir -p /var/vmail/sieve
chown -R vmail:vmail /var/vmail

Then we sieve away the spam:

require "fileinto";
if header :contains "X-Spam-Status" "YES" {
fileinto "Junk";
stop;
}

/etc/dovecot/conf.d/auth-sql.conf.ext

Here we configure our override fields, so we don’t have to do an ugly select:

userdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql.conf.ext
    override_fields = uid=vmail gid=vmail home=/var/vmail/%u
}

/etc/dovecot/dovecot-sql.conf.ext

We are almost done; now we configure how dovecot accesses the database. Append this to the end:

driver = pgsql
connect = dbname=mail user=mail password=your.mail.password
default_pass_scheme = CRYPT

password_query = SELECT email AS user, '{CRYPT}' || password AS password FROM accounts WHERE active = true AND email = '%u' AND email != '' AND password != ''
user_query = SELECT email FROM delivery WHERE email = LOWER('%u')

/etc/dovecot/dovecot.conf

Last but not least we update the protocols we are going to use:

protocols = imap pop3 lmtp

And finally start the dovecot:

rcctl start dovecot

Adding Accounts and Aliases

To generate securely hashed passwords, you can use “smtpctl encrypt” and then enter your password. The resulting hash can be used as a replacement for PASSWORD:

INSERT INTO accounts (email,password) VALUES ('my@first.email.address','PASSWORD');
INSERT INTO virtuals (email,destination) VALUES ('my@second.mail.address','my@first.email.address');
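If you want the full round-trip in one place, here is a rough sketch (using the example address from above; the hash that smtpctl prints is what goes in place of PASSWORD):

# smtpctl prompts for the password and prints a crypt(3) hash
smtpctl encrypt
# then run the INSERT statements above with that hash, e.g. interactively:
psql -Umail mail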

That’s it

You should now be able to use this setup as expected.
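If you want to quickly verify that the TLS listeners actually answer (the hostname is the placeholder used throughout this post), something like this does the trick:

openssl s_client -connect replace.with.host.name:993 -quiet
openssl s_client -connect replace.with.host.name:587 -starttls smtp -quiet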

If you find any errors, you can find me on Twitter and let me know!

Call Of Duty: Warzone - My dirty tricks

Here is my list of dirty little secrets that make the game more fun and yield more kills.

Sending Players to the Gulag before they hit the ground

  • while parachuting with other players beside you, cut your parachute and start shooting at them with your weapon.
  • skydive to a helicopter fast, get in and fly into slow players with parachutes effectively sending them to the Gulag before they hit the ground.

Effectively using the time in the Gulag, when watching from above

  • spray as many of your opponents as you can to make them more visible during the fight
  • if you see a player down there placing a Claymore Mine, try throwing a rock at the mine to set it off into his face
  • if a player down there gets hit and isn’t done yet, you can try to kill him by throwing rocks at his head

On the Battlefield

  • place a Trophy system on your Jeep or into your Helicopter for that automatic defense system
  • when three people on the same team call in a UAV at the same time, your team will get an Advanced UAV for the remainder of the game, showing everyone on the map

If you have more of these

Just go ahead and contact me on Twitter and i will add your tips with credits here.

Now go out and use it wisely. ;)

cheers

PostgreSQL custom sorting made easy

Every developer knows the pain of sorting database rows by some custom field. The easiest example of this is DNS records.

Today i’ve come across a solution using array_position. Basically you can pass it the order of elements you want on top, and everything else will sort below.

It works like this

SELECT hostname, type, content FROM records AS r
ORDER BY array_position(array['SOA'::varchar, 'NS'::varchar, 'A'::varchar], r.type), type

Now all records having type SOA will be first, type NS second and type A third. Everything else will come after.

The important part

Make sure you typecast your custom array into the type of the column you are ordering against.
In my case the type column is a varchar, that’s why i am casting all elements to it.

I hope this helps some of you avoid writing custom sorting in code.

Switching to Jekyll

For almost 8 years i have run this blog on my own blog engine, println. While at the beginning it gave me all i needed, in recent years my behavior has changed, and thus my preferences. I don’t like editing blog posts in the browser anymore. I got even more used to vi (neovim specifically) than i was before. So it was time for a change.

After a bit of exporting my old PostgreSQL database for my blog into Jekyll format, i am happy to present the result. I hope i got all the permalinks right, but i guess i will find out.

I really hope that this motivates me to write more frequently in the future again.

cheers

Using Juniper JunOS apply-groups for IXPs (like AMS-IX or DECIX)

So recently i’ve been cleaning out configurations on our network equipment, in order to get rid of technical debt. Two of these missions were simplifying our switch and router configurations. This has been on my todo-list forever, but i hardly ever got to researching it.

The Problem

If you’re either operating JunOS switches or routers, you probably have come across a lot of duplicate configuration. Imagine a client (let’s call them “Acme Corp”) has 2 switchports configured on one of your EX Series switches. Usually this would look something like this:

ge-0/0/0 {
    description "Acme Corp - Server 1 - Port 0";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}
ge-0/0/1 {
    description "Acme Corp - Server 1 - Port 1";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}

There is nothing wrong with that, but this gets you a lot of configuration lines very fast, which makes it a little hard to maintain in my opinion.

Same goes for BGP peers, your configuration for AMS-IX peers will repeat itself over and over again.

group amsix-v4-rs {
    type external;
    description "AMS-IX IPv4 Route Servers";
    local-preference 200;
    import peer-in;
    family inet {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 80.249.208.255;
    neighbor 80.249.209.0;
}
group amsix-v6-rs {
    type external;
    description "AMS-IX IPv6 Route Servers";
    local-preference 200;
    import peer-in;
    family inet6 {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 2001:7f8:1::a500:6777:1 {
        description rs1.ams-ix.net;
    }
    neighbor 2001:7f8:1::a500:6777:2 {
        description rs2.ams-ix.net;
    }
}

Here again, lots of configuration repeating itself (apart from these two being v4 and v6 mixed). But overall, lots of stuff gets repeated for BGP peers over and over again, which makes changes to policies a tedious task, where you have to update every single BGP peer.

How to do it cleanly then?

I’m guessing (by the fact that you visited this blog post) that apply-groups are a new thing to you, so i’m gonna explain it a bit in a dummy way. There are probably things here and there that could be done better, but this works exceptionally well for me.

What would the Switch config look like with apply-groups?

First we would set the apply groups:

groups {
    ACME-SERVER {
        interfaces {
            <*> {
                description "Acme Corp Server Interface";
                unit 0 {
                    family ethernet-switching {
                        port-mode trunk;
                        vlan {
                          members public, acme-private;
                        }
                    }
                }
            }
        }
    }
}

Then we configure the interfaces:

interfaces {
   ge-0/0/0 {
        description "Acme Corp - Server 1 - Port 0";
        apply-groups ACME-SERVER;
    }
    ge-0/0/1 {
        description "Acme Corp - Server 1 - Port 1";
        apply-groups ACME-SERVER;
    }
}

This makes it so much easier to tag Switchports for various types of configurations, without having to keep track of all the changes across each interface.

What would a BGP config look like?

Again we set up the apply groups:

groups {
    AMSIX-BGP-v4 {
        protocols {
            bgp {
                group <*> {
                    type external;
                    description "AMS-IX BGP Peer";
                    local-preference 200;
                    import peer-in;
                    family inet {
                        unicast;
                    }
                    export peer-out;
                    remove-private;
                }
            }
        }
    }
}

Now our BGP Peer group section looks like this:

protocols {
    bgp {
        group amsix-v4-rs {
            apply-groups AMSIX-BGP-v4;
            description "AMS-IX IPv4 Route Servers";
            peer-as 6777;
            neighbor 80.249.208.255;
            neighbor 80.249.209.0;
        }
    }
}

What we learned

You now know how to easily manage templates on JunOS configuration sections. This knowledge also applies to all other configuration areas, as far as i know. It’s not limited to these 2 scenarios, so feel free to play around with it. :)
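By the way, a handy way to verify what a group actually expands to is JunOS’ standard display inheritance pipe; the interface and group names below are just the examples from this post:

show configuration interfaces ge-0/0/0 | display inheritance
show configuration protocols bgp group amsix-v4-rs | display inheritance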

Thanks for reading

Simplistic Auto Provisioning for BSDs, UNIX and Linux, using just DHCP

For a few weeks now i’ve been thinking about better tools to provision our bare-metal servers and VMs. All tools out there are IMHO bloatware. Over-complicated stuff where nobody knows when the next library upstream will break feature X which will prevent shit from working. Typical wobbly constructs we have these days. I’m not a fan of them, you shouldn’t be either.

But yesterday noon i read one more of these guides to set up something which wants you to curl their installer and pipe it through bash, YIKES.

Then, in my typical haze, i decided to play a little mind-game: WHEN would be a moment where this curl | bash scenario would be valid, or at least a bearable solution? Of course! A solution to my previous provisioning dilemma presented itself…

What you need

  • a HTTP Server (nginx, apache, anything that can serve a file)
  • a DHCP Server where you can define custom fields (dnsmasq, kea, isc-dhcpd, ..)
  • a DHCP Client which lets you parse custom fields (dhcpcd, the ISC dhcp client; NOT Henning Brauer’s dhclient)

The quick gist

  • DHCP server sends out custom field with URL inside
  • DHCP client picks up that field and processes it in a hook with curl | sh

WARNING: THIS IS POTENTIALLY DANGEROUS! THIS IS KEPT SIMPLE FOR THE SAKE OF THIS HOWTO

BETTER APPROACH: gpg sign the script (even when auto-generated) on the server side, and have the client verify the signature against the pubkey.
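A rough sketch of that signing idea, purely for illustration (file names are made up, and the client side assumes the public key is already imported on the machine being bootstrapped):

# server side: (re)sign the script whenever it changes
gpg --batch --yes --armor --detach-sign -o bootstrap.sh.asc bootstrap.sh

# client side, inside the dhcpcd hook, before executing anything
fetch -o ${TMP}.asc "${new_bootstrap}.asc"
gpg --verify ${TMP}.asc ${TMP} || exit 1
sh ${TMP}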

How to do it

First configure your DHCP server to deliver a custom field in the reserved range (upwards of 200 i think, but check before you decide). In the payload we just stick in a URL that can be reached from a DHCP client.

dnsmasq.conf

dhcp-option-force=254,http://192.168.0.1/bootstrap.sh

dhcpd.conf

option server-bootstrap code 254 = string;
subnet 192.168.0.0 netmask 255.255.255.0 {
    [...]
    option server-bootstrap "http://192.168.0.1/bootstrap.sh";
}

Client configuration

Next you need to slightly modify your client’s setup. I’ve only used dhcpcd for this, as FreeBSD’s and OpenBSD’s default dhclient can’t do custom fields anymore; they all get filtered and there is no configuration option for it anymore.

dhcpcd

On FreeBSD, i’ve placed a dhcpcd.enter-hook script at /usr/local/etc/dhcpcd.enter-hook

#!/bin/sh

# for security reasons, you should really check here if bootstrapping is required
# you don't want anyone pushing bad scripts that get executed by a rogue dhcp server
if [ "${new_bootstrap}" != "" ]; then
    TMP=$(mktemp)
    fetch -o ${TMP} ${new_bootstrap}
    # for more security, you might also want to gpg sign your script and have gpg verify it here
    sh ${TMP} || exit 1
fi

Last we need to modify dhcpcd.conf to request the extra field, so it gets delivered by the DHCP server. I just added those two lines to the default:

define 254 string bootstrap
option bootstrap

bootstrap.sh hosted on the HTTP Server

This is our bootstrapping shell script. This could be anything, there could be many of these for each profile, there could also be a rendering process on the server side, whatever floats your boat. Mine is just a basic sample to get the idea across:

#!/bin/sh

echo
echo
echo "first: do some meaningful diagnosis/inventory here"
echo "  like posting dmidecode and other stuff to your remote"
echo
echo "second: if this is used to bootstrap bare metal machines booting pxe"
echo "  IMPORTANT: check for existing installations on your disk"
echo "             like is there a partitioning scheme already here?"
echo "  then you could go ahead and install whatever you want"
echo
echo "third: enroll this system into configuration management like CFengine"
echo "  like: cf-agent -B your.cf.host && cf-agent -KIC"
echo
echo "sleeping 10 seconds... then just running some wall command"
sleep 10
echo "dhcp-bootstrapping sez HELLO KITTENS!"|wall

The result

Output from running dhcpcd em0 shows that it works :)

DUID 00:01:00:01:21:6f:7f:9e:08:00:27:d7:7f:f9
em0: IAID 27:d7:7f:f9
em0: rebinding lease of 192.168.168.80
em0: leased 192.168.168.80 for 7200 seconds
em0: changing route to 192.168.168.0/24
em0: changing default route via 192.168.168.1
/tmp/tmp.BYYgx9dr                             100% of  691  B 1670 kBps 00m00s


first: do some meaningful diagnosis/inventory here
  like posting dmidecode and other stuff to your remote

second: if this is used to bootstrap bare metal machines booting pxe
  IMPORTANT: check for existing installations on your disk
             like is there a partitioning scheme already here?
  then you could go ahead and install whatever you want

third: enroll this system into configuration management like CFengine
  like: cf-agent -B your.cf.host && cf-agent -KIC

sleeping 10 seconds... then just running some wall command

Broadcast Message from root@test
        (/dev/pts/0) at 15:45 CEST...

dhcp-bootstrapping sez HELLO KITTENS!

forked to background, child pid 34507
root@test:~ #

Final thoughts

These very simple elements, thrown together in the right way, make for a very reliable and especially maintainable setup! No wiggly parts, no extra software you don’t have running anyways. Just plain old Ops-Tech put together the right way. Easy to investigate with tools you already know, easy to customize the heck out of it.

I hope this helps some of you to build better, more reliable and easier to maintain systems.

Golang is really awesome and why it beats Scala/JVM

So i learned Golang a few months back. Thanks to @normanmaurer and @MegOnWheels for the great suggestion! Not because i wanted to, but because Scala and the JVM started to suck after almost a decade.

Why did the JVM start to suck?

When i started using the JVM, i was happy that my application and its virtual machine/runtime would be separate parts. After 9 years of coding nearly full-time Scala, i’ve come to hate it. Why?

Because the variance in the JVM makes it extremely hard to build predictable applications. One version does this, the next breaks that, so from a quality coder perspective, you have to work around your runtime’s issues and capabilities.

Next up, in order to use the latest features like TLS SNI (which isn’t really cutting edge in the wake of TLS 1.3), you need to keep your JVM/runtime up to date, everywhere you want to run that feature. (TLS SNI was Java 7 -> 8)

If you’re a coder with no Ops-responsibilities, this might seem acceptable to you, but i have to care about operating the code that i write, just as much as i have to care about the code itself!

So what makes golang (imho) superior?

You get a statically linked binary. No Runtime, no nothing installed.

This is especially awesome from a deployment standpoint, as you only need to take care of your binary and its assets (if any).

Also noteworthy: since my Scala/Java .jars (with all dependencies bundled) were rarely less than 60MB, on top of a 500MB+ JVM, that makes for a lot of wasted disk-space and things that need regular updating. My golang binaries are rarely more than 13MB, all together.

Last but not least, scala-sbt sucks donkey balls. Straight up. In my opinion, it is the single worst build tool EVER conceived by a human! Regularly breaking backward compatibility, requiring me to deal with new plugins and shit, HORRIBLE!

I want a build tool that just builds my code and churns out a usable binary form.

Which is what the ‘go’ tool actually does. Apart from its super feature-richness like testing, fuzzing and all that nice stuff, it also builds code reliably and without much of a config file that i need to keep in shape! A stupid simple Makefile suffices for all my needs so far.
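For what it’s worth, the targets in that Makefile boil down to little more than this (the binary name is made up; the file layout matches the src/*.go style from the cross-compilation example further down):

go build -o myapp src/*.go
GOOS=freebsd GOARCH=amd64 go build -o myapp-freebsd src/*.go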

Also, when i needed disk-space previously on Scala/JVM, rm -rf ~/.ivy2 solved most of this, since all your dependency jars pulled by sbt live there. But once you do that, maybe you should look for another career, since it’s likely that some artifacts/jars might not be available anymore, breaking your build. As opposed to Golang, where i just git clone my dependency’s source into my repository and either add it as a git submodule or just straight up git add the dependency code.
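The vendoring-by-submodule workflow i mean is literally this (repository URL and target path are made-up examples):

git submodule add https://github.com/someone/somelib vendor/github.com/someone/somelib
git commit -m "vendor someone/somelib"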

Scala binary incompatibility (update to original article)

A number of people pointed out that having a binary dependency cache is almost as good as having sources.

Well, ever come across multiple Scala versions? Or just been in the Scala game for too short a time to know Scala binary incompatibilities? Yeah, they’re fucking awesome if you love that kind of stuff. I don’t. Do you want to hunt down all dependencies of Package X that only worked on Scala 2.9 but need to be recompiled for your 2.10 project? Or 2.11 or whatever?

Happy fun going through that. I wish you lots of fun.

Inline bugfixing (added as well after original publication)

I don’t know about you guys, but i like to fix bugs in other people’s code that i use. Fills me with pride and makes me happy to see other people benefiting from my code.

So whenever i had to track down issues in Scala/JVM-land, my usual procedure was downloading that library’s sources, then trying to get that developer’s build tool to work. Sometimes it’s sbt. Sometimes it’s ant. Sometimes maven. Sometimes something i haven’t even heard of. Awesome, right?

Now i would spend my time getting that stuff to work, then spend my time fixing the bug.

WASTE OF TIME

If i already have the sources, and i already made them compile for my current version, isn’t it a lot easier to just go to the line, change it, and test the code?

Or would you rather go through the whole build process of that maintainer’s build tool, place the resulting .jar in your cache or deploy it somehow, then possibly download it again and change your build to use the new artifact?

From a simple logic perspective i’d always choose the first, as it saves me a lot of headache and lets me focus on the problem at hand.

Cross compilation

Granted, this isn’t an issue on the JVM as long as you have a working JRE for your platform. Having a fat-ass JVM running on your RaspberryPI might not be the best use of its CPU, again, in my opinion.

How does go deal with this? Well, there is this excellent talk from Rob Pike about go compiler internals (slides), which explains that since go 1.7 you don’t have to go through the C barrier anymore, but can have golang compile straight from Go to ASM. Yup, fucking dank!

So in order to cross-compile some pure go code on OSX for my RaspberryPI, i just run:

GOOS=freebsd GOARCH=arm GOARM=6 go build src/*.go

Yup, that’s it. scp that binary and be happy with it. Why not do it on the ARM itself? Well a) it takes prolly a lot longer than on my Intel i7 octo-core, b) golang on ARM is only available up to version 1.4, since there are some issues with newer versions (haven’t checked further), but cross-compiling with 1.8-HEAD works just fine.

Performance

From my first few months of using it in production i can confirm that for my use-cases (mostly network code), golang is extremely fast, even tho Zero-Copy isn’t supported on FreeBSD yet.

Memory consumption for our applications is about 1/10th of the original JVM project, thus reducing memory requirements throughout our datacenter operations; about 6/10ths of the previously used JVM RAM got freed from our FreeBSD VMs, leaving a LOT of room for new clients/applications of ours.

Conclusion

Golang is going to be my new primary language, putting Scala in backup-mode, only for existing clients that need their previously developed software supported.

More go related posts to come in 2017!

Kali on the RaspberryPi with 3.5" LCD

So i have acquired myself a cheap Chinese 3.5” LCD display with resistive touch from AliExpress. So far so good, but it took me nearly a month to get a current setup working.

The Problem

The Chinese vendor i got it from refers to a site called waveshare.com, which is so badly connected it never loaded here. So i pulled up Google’s cache of the site, found a file name, LCD-show.tar.gz, which of course also didn’t load. So i set out to find the file, did so, and was baffled.

The Manufacturer provides only Linux Kernel 3.18 binary modules, no sources!

So i started checking what modules they loaded, and came across notro’s rpi firmware. Mildly out of date, but at least there is an issue that has to do with my display and people not getting it to work, since 2014!!

After reflashing that RPI’s disk for the 40th time after soft-bricking the installation with an out-of-date RPI firmware and outdated kernel modules that panic’d the thing on boot, i found the solution.

How do i get it to work?

Well, it’s fairly easy. After reading a bunch of code and googling for yet another file, i stumbled upon swkim01’s waveshare-dtoverlays GitHub repo, which makes the whole process as easy as copying the dtoverlay file into /boot/overlays/, adding one line to /boot/config.txt, rebooting and being done with it.

The Process

git clone https://github.com/swkim01/waveshare-dtoverlays.git
cp waveshare-dtoverlays/waveshare35a-overlay.dtb /boot/overlays/    # or waveshare32b-overlay.dtb for the 3.2" model

Then adding the following to /boot/config.txt (depending on your display and needs):

3.2” LCD’s /boot/config.txt with 270° rotation

dtoverlay=waveshare32b:rotate=270

3.5” LCD’s /boot/config.txt with 90° rotation and having XY of touch swapped

dtoverlay=waveshare35a:rotate=90,swapxy=1

Reboot

After rebooting, my display lit up in black (if the driver is not loaded it stays white) but didn’t do much else. Add the following lines to /usr/share/X11/xorg.conf.d/99-fbdev.conf (or create that file if you don’t already have it from failed attempts):

  Identifier "myfb"
  Driver "fbdev"
  Option "fbdev" "/dev/fb1"
EndSection```

Then running FRAMEBUFFER=/dev/fb1 startx made it launch into X for the first time. YAY


Calibration

After i had it running, i noticed that my mouse didn’t appear where it should be: if i touched the screen (even with the pen that came with it), the position of the event was off. So i figured out this approach to get it working for my 270° rotated setup.

You just have to put the driver’s information into the Xorg config. I put this part into /usr/share/X11/xorg.conf.d/99-calibration.conf:

Section "InputClass"
      Identifier "calibration"
      MatchProduct "ADS7846 Touchscreen"
      Option "Calibration" "3869 178 3903 270"
EndSection

If these values do not work for you, install xinput-calibrator (apt-get install worked) and run it while having X open. At the end it will yield a configuration for you to put into 99-calibration.conf.
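In case you need that route, it looks roughly like this (the display number may differ on your setup):

apt-get install xinput-calibrator
DISPLAY=:0 xinput_calibrator     # prints a Section "InputClass" block to paste into 99-calibration.conf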



Ideas for the future

I want to build a little Raspberry PI powered WiFi-Attack-Station. Basically plug it into a power-bank or into a wall-socket, wait for the GUI to appear and then either select a WiFi which it shall attack, or have it auto-attack everything around it. Currently i’m writing a wrapper-script for aircrack-ng’s cli tools that wraps the needed steps; after that i’ll dig into GUI stuff, which i’ve never coded this way before. (Only VisualBasic back around 2000)


Happy hacking yourself!

Merry Christmas and Happy 2017 (maybe not for you US citizens)