AI Assisted Coding: Stop whining and learn the tool

Over the last few years, i’ve immersed myself completely in the AI landscape. I’m not just talking about playing with ChatGPT for fun; i’m talking about high-level consulting, training and running my own models in my own racks here in Nuremberg, Germany, and building scalable services on top of this emerging tech.

Through all of this, if there is one lesson i’ve learned that i need to hammer home, it’s this: it’s far from all great, but we are also nowhere near the “doom and gloom” scenario that a lot of conservative IT folks make it out to be.

The “All-Knowing Oracle” Fallacy

The single biggest issue i see in the industry right now is professionals treating AI as an all-knowing oracle. It is not. It is a probabilistic engine with a hard training cutoff.

If you are asking it for recent frameworks, zero-day exploits, or bleeding-edge library changes without deep research or web search capability enabled, you will get old or wrong information. That is not the AI being “stupid” or “useless”—that is you using the tool incorrectly.

You wouldn’t use a hammer to drive in a screw and then complain that the hammer is broken.

It’s a Tool, Not a Replacement

This technology is nothing more than a new, incredibly powerful tool in your tool belt. It is not a god-tool that solves architecture problems by magic. But, if you take the time to actually master the AI CLI tools, you can command them to control your system with ease or develop boilerplate and logic faster than you can physically type.

The barrier to entry here is the willingness to learn. Learn the specific quirks of the models, learn how to prompt correctly, and mastery will follow.

The Cost of Mastery (and how to mitigate it)

I hear the complaints constantly: “I’m not paying $20 for ChatGPT” or “$200 for the good Claude plan is too much.”

Look, if you want professional results, you need professional tools. However, i know the subscriptions add up, especially when you need access to the absolute top-tier context windows.

Here is a hot tip for the budget-conscious:

You can grab legitimate accounts on marketplaces like G2A for a fraction of the enterprise price.

We are talking about a $200 subscription tier for roughly $20.
Disclaimer: You are buying a pre-provisioned account, not upgrading your personal one. Just make sure you understand the trade-offs: you are not sharing the account with anyone, but it is a new login every month.

My Current “Daily Driver” Loadout

The landscape changes weekly, but as of late 2025, this is what i am actually running in production:

1. Context Heavy Lifting:
MiniMax Plus/Max ($20/$50). This has been my daily workhorse for anything requiring massive context. When you need to dump an entire repository into the prompt to refactor a legacy module, this is currently the king.

2. Integration & Logic:
Google Gemini CLI & Gemini 3 Pro. These past couple of days, this has been a standout performer. It handles system integration and logical reasoning surprisingly well, often beating out the others on complex instruction following.

3. The “Classic” Options:
I still maintain subscriptions for Claude Code and OpenAI Codex, but it feels like they are constantly playing games with their user base. They release a ground-breaking model, get everyone hooked, and then silently nerf it to save on backend compute costs once scaling becomes an issue.

I find myself constantly toggling between max subscriptions for both. It’s an endless cycle of GPT 5, 5.1, 5.2, followed by Claude retorting with Sonnet 4.5, Opus 4.5, and Haiku 4.5. The quality fluctuates wildly, so you need to be agile and willing to switch providers.

(Side note: It is genuinely sad that Deepseek and Kimi don’t offer a dedicated code subscription yet. I’d jump on that in a heartbeat.)

A Real World Case Study: The GitLab Upgrade

To prove this isn’t just theoretical, here is a recent win. I successfully used Claude Code to upgrade a self-hosted GitLab instance from v15 to v17.

If you’ve ever managed GitLab, you know the upgrade paths are treacherous. This instance was running on Kubernetes, adding a massive layer of complexity. We even hit a snag where the PostgreSQL version requirements changed mid-stream; i was running an old v13 instance from a time when pg_upgrade automation wasn’t standard yet (though it is now), and it handled that transition flawlessly too.

It took about a day. I didn’t just tell the AI “upgrade this”—that would be suicide. I commanded it to perform backups. I had it verify migration paths. I had it check for deprecations. The AI was the hands, but i was the supervisor.

The result? Zero data loss. Full upgrade completed in about 24 hours. A human team debugging the Helm charts, migration failures, and PostgreSQL schema changes would have likely taken three times as long.

The Golden Rule: Context and Verification

Here is the trick that makes this viable in production: always prompt the AI to verify what it just did after every command.

Do not just “rawdog” a sequence of 20 commands blindly and then discover “whoopsie, i deleted your prod db.” You need to be explicit. Tell it: “This IS the production deployment. Be careful.”

It can be a tremendously effective engineer and problem solver, but only if you let it in on the details.

If you assume it should know that prd5.xyz.gcp... is your production environment, ask yourself: would a trainee or a fresh new hire understand that context immediately? Probably not.

So treat the AI like that new hire. Give it the full context. Tell it the stakes. And force it to verify its own work step-by-step.
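To make this concrete, here is a made-up sketch of such a context-setting prompt (the host name and the task are placeholders; adapt everything to your environment):

You are operating on prd5, our PRODUCTION GitLab deployment on Kubernetes.
Stakes: live customer data, zero tolerance for data loss.
Before any destructive command, explain what it does and why it is safe.
After every command, verify the result (exit code, pod status, logs) before continuing.
If anything looks off, stop and ask me first.
First task: create a full backup and verify its integrity.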

The Plumber Analogy

This brings me to the most critical point: Do not use AI for tasks where you cannot judge the output.

It is basically a super-fast typewriter for your ideas. Use it to accelerate your workflow.

However, if you veer off into territory you don’t understand—if you ask it to write kernel modules when you don’t know C, or manage a Kubernetes cluster when you don’t understand pods—you will end up with a destroyed home directory or a compromised production database.

If used correctly, it is a humongous multiplier for any IT professional.
If used incorrectly, it is a liability.

Think of it like a plumber. If a plumber doesn’t know their tools and floods a client’s bathroom, it isn’t the wrench’s fault. It’s the plumber’s fault.

It’s your fault.

If you’re in IT (otherwise you probably would not be reading this), your sole job is to constantly learn new tools and emerging technologies. If you’re not doing that, then maybe IT isn’t for you?

In any case: this is a new tool and most of us should at least learn how to use it. It will make a massive impact on how much time you actually spend solving problems versus solving the “problems around your actual problems.” We all know the drill: you just wanted to add a user, but some certificate expired, then some policy needed updating from the last version, and suddenly it’s 3 hours later.

Why not just have the AI do that in the background? Send it off, fetch yourself a coffee, call someone, or clear a few 5-minute tasks from your list. The number of times per day i can knock out small tasks, because the big stuff (the stuff that usually requires huge amounts of focus) now only requires me to review, steer, and manage, is life-changing.

I’ve basically become a manager and reviewer of 10 virtual me’s. By now, i’ve got them almost behaving like me and doing things exactly like i’d do them. Make use of this. Use it to free up your day for other things like playing video games, going outside, or spending time with your pets—or go full 100x engineer and be your own company of 30 (10 virtual employees working round the clock is basically 30 FTEs).

Stop whining, pay the few bucks, and actually learn to use the tools of your trade.

Merry Christmas 🎄

Adobe Photoshop 2024 AI Features - Generative Fill

You’ve probably heard about the new Generative Fill feature in the upcoming Photoshop version. Well, i’ve come bearing good news: it’s public, available to everyone with an Adobe Creative Cloud subscription, and it’s called Adobe Photoshop 2024.

Now i’ve been using the AI features in Photoshop Beta since it came out a couple of months ago, and i must say that i’m very happy about what Adobe has put together.

These AI features creep into almost everything Adobe Photoshop does.

Object Select

Now with AI power, it’s even easier to select objects inside your images; no need to use the Lasso for drawing odd shapes anymore.

Editing layers

Having a layer and just editing its prompt to get new variations might be the most mind-blowing feature of them all.

Editing the Sky

Go to the Select menu and pick Sky; now you can use Generative Fill to create a whole new sky.

so much more

Over time, i think we will find more and more of these helper features that make using Adobe Photoshop really something new.

5 tips to improve your AI Image creation prompts

Yeah i lied, it’s actually 6 things you’re doing wrong. My bad.

Since y’all have probably used Midjourney V5 in the past 6 months or so, i wanted to share a couple of tips and tricks to improve your prompts. This works for most AI image generators such as DALL-E, Midjourney, Gencraft, etc.

What are prompts?

I’m pretty certain that 80% of the English-speaking IT folks know this, but i’ll do it for the 80% of German IT engineers who have no fucking clue.

Prompts are the thing you tell the AI to do. You prompt it to draw an image of “gay pirate wearing a nazi pyjama”, and it will do so. The prompt is your Input for most of the current Transformer based models.

Tip 1 - make your prompts shorter

Having very long prompts doesn’t help. It’s not like horsepower, where more is better; it’s about being precise and concise. Almost like talking to your girlfriend. The more you talk, the deeper the trouble gets. Just stay on point.

Tip 2 - not providing enough constraints

Same thing, if you don’t give your girlfriend any constraints, then god knows, she’ll start buying import beer or Rosé. Same goes for AI image generators. Open-ended prompts without guidance will produce random results, nothing you can repeat or use. For example, “Spaceship” or “Airplane” might be too general; try something like “an American Airlines Airplane” or “a futuristic spaceship, star trek style, entering a wormhole”.

Tip 3 - ambiguous language

You need to be very clear, or your Girlfriend^WAI will have issues. Avoid pronouns and implied subjects that can confuse the Girlfriend^WAI.

Tip 4 - abstract concepts

Just like your girlfriend, abstract concepts make the AI’s head hurt. It has no idea about emotions, so instead of having your AI prompt be “Painting of a happy female”, make it “Painting of a smiling female”. The AI will then know what smiling is, but the emotion/abstract concept behind “happy” might be going too far.

Tip 5 - being impatient

So the first result that DALL-E or Midjourney yielded isn’t perfect, and of course you’re giving up… But nobody told you that it can take several attempts, and a fair bit of learning, to craft your prompts correctly.

Depending on the model and what it has been trained on, you will need to play around to become proficient. Have you ever heard of the 8-year-old car mechanic? Exactly.

Tip 6 - use ChatGPT to create your ideal prompts

Now here comes the kicker. Instead of retraining your brain to write these stupid prompts, you could also just ask ChatGPT (gpt-4) to write them for you.

Priming ChatGPT

You are an AI model developed by OpenAI, and I’m introducing you to Midjourney, an AI tool that generates images based on text prompts. Your task is to assist me in creating these prompts. Are you ready?

Examples

You should pass ChatGPT some examples of good prompts, so it knows what you’re looking for.

Some examples of really good prompts are:
- “A serene mountain landscape covered in a blanket of snow, rendered in a minimalist style, capturing the subtle shades of white and blue under the soft morning light.”
- “A futuristic cityscape at night, illuminated by neon lights and towering skyscrapers, depicted in a cyberpunk art style with a strong emphasis on the contrast between the dark shadows and vibrant glow.”
- “A bustling market scene in a bustling Asian city, portrayed with a lively and vibrant palette, highlighting the intricate patterns of textiles and the busy atmosphere with dynamic brushstrokes.”

For instance, you can ask:

Can you generate a prompt for an image of a beach under a starry night sky?

Or you can phrase your query like this:

Title: “A beach under a starry night sky.”

ChatGPT will then create a prompt based on the provided concept:

Prompt: “A tranquil beach scene beneath a star-studded night sky, rendered in an impressionistic style. The sands are soft and glistening, reflecting the brilliant stars above. The gentle waves ebb and flow with a mesmerizing rhythm, the whitecaps gleaming under the stellar glow. The calm serenity of the ocean and the enigmatic vastness of the starry night sky paint a breathtaking tableau.”

Switching to hexo.io

After trying to upgrade my Jekyll version from two years ago, and failing miserably, i decided to look for alternatives.

Meet Hexo.io

What is it?

Hexo is a fast and simple blog framework powered by Node.js that renders static pages from Markdown. The entire migration took me (incl. reworking some pages and adding new ones) about 3 hours.

Since i am running nix on my Macs, it’s easily doable with a shell.nix.

{ pkgs ? import <nixpkgs> {} }:

let
  lib = import <nixpkgs/lib>;
  NODE_MODULES_PREFIX = toString ./node_modules;

in pkgs.mkShell {
  packages = with pkgs; [
    nodejs
    nodePackages.npm
  ];

  inherit NODE_MODULES_PREFIX;

  shellHook = ''
    export PATH="$PATH:$NODE_MODULES_PREFIX/hexo-cli/bin"
    npm install
  '';
}

Now i can just cd into my repo and run nix-shell and have hexo ready in my path, ready to make new posts!
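From there it’s just the stock hexo-cli workflow, for example (paths and port reflect hexo’s defaults):

nix-shell
hexo new "Hello World"    # creates source/_posts/Hello-World.md
hexo server               # live preview at http://localhost:4000
hexo generate             # renders the static site into public/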

OpenSMTPD and Dovecot with a shared PostgreSQL, Sieve and RSpamd on OpenBSD 6.6

I finally got around to setting up a new mailserver and i decided to give OpenSMTPD a try. It wasn’t a natural birth, i can tell you that. The switch in configuration syntax makes for a lot of outdated Google search results.

So what are we going to set up? Well, the title gave it away i guess, so for the slow ones amongst you: we are building a mailserver with OpenSMTPD, Dovecot, RSpamd and Sieve. OpenSMTPD and Dovecot will both be using the same authentication table and hashing scheme, making this a nifty solution.

Installing the required components

pkg_add postgresql-server opensmtpd-extras opensmtpd-extras-pgsql opensmtpd-filter-rspamd opensmtpd-filter-senderscore rspamd dovecot dovecot-pigeonhole dovecot-postgresql redis

Enabling them on boot

rcctl enable httpd
rcctl enable smtpd
rcctl enable postgresql
rcctl enable rspamd
rcctl enable dovecot
rcctl start dovecot
rcctl enable redis
rcctl start redis

Setting up DNS

This has been explained in numerous posts on the Internet; you should by now know how to set up an MX record (and ideally SPF and DKIM).
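For reference, the bare minimum looks something like this in your zone (the names and the IP are placeholders from the documentation ranges):

example.org.                 MX    10 replace.with.host.name.
replace.with.host.name.      A     192.0.2.25
example.org.                 TXT   "v=spf1 mx -all"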

Setting up Let’s Encrypt SSL Certificates

/etc/httpd.conf

Configure httpd to do the acme challenges.

server "replace.with.host.name" {
    listen on * port 80
    location "/.well-known/acme-challenge/*" {
        root "/acme"
        request strip 2
    }
    location "/" {
        block return 301 "https://$SERVER_NAME$REQUEST_URI"
    }
}

And then start httpd:

rcctl start httpd

/etc/acme-client.conf

Now we go on to configure the acme-client.

api_url="https://acme-v02.api.letsencrypt.org/directory"
authority letsencrypt {
    api url $api_url
    account key "/etc/acme/letsencrypt-privkey.pem"
}

domain replace.with.host.name {
    #alternative names { www.replace.with.host.name }
    domain key "/etc/ssl/private/replace.with.host.name.key"
    #domain certificate "/etc/ssl/replace.with.host.name.crt"
    domain full chain certificate "/etc/ssl/replace.with.host.name.crt"
    sign with letsencrypt
}

Obtaining a certificate

acme-client -v replace.with.host.name

Adding certificate renewal to cron

Enter the crontab with crontab -e and add the following line:

30      0       *       *       *       /usr/sbin/acme-client replace.with.host.name && /usr/sbin/rcctl restart smtpd && /usr/sbin/rcctl restart dovecot

Preparations for our services

/etc/login.conf

Go ahead and add the following lines at the end of your /etc/login.conf:

dovecot:\
        :openfiles-cur=1024:\
        :openfiles-max=4096:\
        :tc=daemon:

postgresql:\
        :openfiles=768:\
        :tc=daemon:

Once done, have the file cap_mkdb’d like this:

cap_mkdb /etc/login.conf

/etc/sysctl.conf

Append the following values to /etc/sysctl.conf so PostgreSQL has a bit of breathing room:

kern.seminfo.semmni=60
kern.seminfo.semmns=1024

Then set them in the running kernel right away:

sysctl -w kern.seminfo.semmni=60 kern.seminfo.semmns=1024

Adding a vmail user and group

groupadd vmail
useradd -m -d /var/vmail -s /sbin/nologin -g vmail vmail

Preparing PostgreSQL

su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W
exit
rcctl start postgresql

Next we are going to add a user, a database, two tables and three views:

psql -Upostgres <<EOF
CREATE USER mail WITH ENCRYPTED PASSWORD 'your.mail.password';
CREATE DATABASE mail OWNER mail;
EOF

psql -Umail mail <<EOF

-- this is the table for the users accounts
CREATE TABLE public.accounts (
    id serial,
    email character varying(255) DEFAULT ''::character varying NOT NULL,
    password character varying(255) DEFAULT ''::character varying NOT NULL,
    active boolean DEFAULT true NOT NULL
);

-- this is the table for the virtual mappings for email -> email
CREATE TABLE public.virtuals (
    id serial,
    email character varying(255) DEFAULT ''::character varying NOT NULL,
    destination character varying(255) DEFAULT ''::character varying NOT NULL
);

-- this view is used to determine where to deliver things
CREATE VIEW public.delivery AS
    SELECT virtuals.email,
           virtuals.destination
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    UNION
    SELECT accounts.email,
           'vmail'::character varying AS destination
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0);

-- this view is used to determine which domains this server is serving
CREATE VIEW public.domains AS
    SELECT split_part((virtuals.email)::text, '@'::text, 2) AS domain
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    GROUP BY (split_part((virtuals.email)::text, '@'::text, 2))
    UNION
    SELECT split_part((accounts.email)::text, '@'::text, 2) AS domain
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0)
    GROUP BY (split_part((accounts.email)::text, '@'::text, 2));

-- this view should control the email addresses users can send with
CREATE VIEW public.sending AS
    SELECT virtuals.email,
           virtuals.destination AS login
    FROM public.virtuals
    WHERE (length((virtuals.email)::text) > 0)
    UNION
    SELECT accounts.email,
           accounts.email AS login
    FROM public.accounts
    WHERE (length((accounts.email)::text) > 0);
EOF

/etc/mail/postgres.conf

Next we configure the PostgreSQL lookups for smtpd:

conninfo host='localhost' user='mail' password='your.mail.password' dbname='mail'
query_alias SELECT "destination" FROM delivery WHERE "email"=$1;
query_credentials SELECT "email", "password" FROM accounts WHERE "email"=$1;
query_domain SELECT "domain" FROM domains WHERE "domain"=$1;
query_mailaddrmap SELECT "email" FROM sending WHERE "login"=$1;

Also, since this file contains the password to the database, only _smtpd should be able to read it:

chown _smtpd:_smtpd /etc/mail/postgres.conf
chmod o= /etc/mail/postgres.conf

/etc/mail/smtpd.conf

Now we can go ahead and configure OpenSMTPD:

table aliases file:/etc/mail/aliases
table auths postgres:/etc/mail/postgres.conf
table domains postgres:/etc/mail/postgres.conf
table virtuals postgres:/etc/mail/postgres.conf
table sendermap postgres:/etc/mail/postgres.conf


pki replace.with.host.name cert "/etc/ssl/replace.with.host.name.crt"
pki replace.with.host.name key "/etc/ssl/private/replace.with.host.name.key"


filter check_dyndns phase connect match rdns regex { '.*\.dyn\..*', '.*\.dsl\..*' } \
    disconnect "550 no residential connections"

filter check_rdns phase connect match !rdns \
    disconnect "550 no rDNS is so 80s"

filter check_fcrdns phase connect match !fcrdns \
    disconnect "550 no FCrDNS is so 80s"

filter senderscore \
    proc-exec "filter-senderscore -blockBelow 10 -junkBelow 70 -slowFactor 5000"

filter rspamd proc-exec "filter-rspamd"


listen on all tls pki replace.with.host.name filter { check_dyndns, check_rdns, check_fcrdns, senderscore, rspamd }
#listen on all port smtps smtps pki replace.with.host.name auth <auths> senders <sendermap> masquerade
#listen on all port submission tls-require pki replace.with.host.name auth <auths> senders <sendermap> masquerade
listen on all port smtps smtps pki replace.with.host.name auth <auths>
listen on all port submission tls-require pki replace.with.host.name auth <auths>

action "receive_aliases" lmtp "/var/dovecot/lmtp" rcpt-to alias <aliases>
match from local for local action "receive_aliases"

action "receive_vmail" lmtp "/var/dovecot/lmtp" rcpt-to virtual <virtuals>
match from any for domain <domains> action "receive_vmail"

action "outbound" relay helo replace.with.host.name
match from auth for any action "outbound"

And finally start the smtpd:

rcctl start smtpd

/etc/rspamd/worker-proxy.inc

In this file i actually just changed the spam_header to X-Spam-Status, but this is optional.
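If you want to do the same, the line in question should look roughly like this (check the option against your rspamd version; the default header name is X-Spam):

spam_header = "X-Spam-Status";

This matches the X-Spam-Status header the junk.sieve script further down filters on.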

/etc/rspamd/local.d/greylist.conf

greylist {
    servers = "127.0.0.1:6379";
    timeout = 1min;
}

Then we start rspamd:

rcctl start rspamd

Diff style below here

I’ve chosen to only put in things you need to change or append, everything else should remain as is.

Why did i do this? Well since dovecot has evolved into this nice configuration-file layout, i decided that this is the most efficient way to keep this document clean and relevant.

/etc/dovecot/conf.d/10-auth.conf

Towards the beginning of the file, disable plaintext authentication:

disable_plaintext_auth = yes

Then, at the end of the file, there are several includes. We are going to comment out auth-system.conf.ext and enable auth-sql.conf.ext instead:

#!include auth-system.conf.ext
!include auth-sql.conf.ext

/etc/dovecot/conf.d/10-mail.conf

We change the mailbox location to our vmail directory.

mail_location = maildir:/var/vmail/%u

/etc/dovecot/conf.d/10-master.conf

Next we are going to uncomment our SSL listeners. Feel free to leave ports 143 and 110 in as well, since they use STARTTLS:

service imap-login {
    inet_listener imap {
        port = 143
    }
    inet_listener imaps {
        port = 993
        ssl = yes
    }
}

service pop3-login {
    inet_listener pop3 {
        port = 110
    }
    inet_listener pop3s {
        port = 995
        ssl = yes
    }
}

Then make sure lmtp is configured with the correct permissions:

service lmtp {
    unix_listener lmtp {
        mode = 0660
        user = vmail
        group = vmail
    }
}

/etc/dovecot/conf.d/10-ssl.conf

Now we are going to configure SSL:

ssl = required

ssl_cert = </etc/ssl/replace.with.host.name.crt
ssl_key = </etc/ssl/private/replace.with.host.name.key

ssl_prefer_server_ciphers = yes

/etc/dovecot/conf.d/15-lda.conf

Next up is our local delivery agent (LDA):

postmaster_address = postmaster@your.domain
hostname = replace.with.host.name

lda_mailbox_autocreate = yes

/etc/dovecot/conf.d/20-lmtp.conf

Now we configure our LMTP for Sieve:

protocol lmtp {
    # Space separated list of plugins to load (default is global mail_plugins).
    mail_plugins = $mail_plugins sieve
}

/etc/dovecot/conf.d/90-plugin.conf

Here we configure the Sieve plugin itself:

plugin {
    sieve_plugins = sieve_imapsieve sieve_extprograms
    sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment

    imapsieve_mailbox1_name = Junk
    imapsieve_mailbox1_causes = COPY APPEND
    imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve

    imapsieve_mailbox2_name = *
    imapsieve_mailbox2_from = Junk
    imapsieve_mailbox2_causes = COPY
    imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

    imapsieve_mailbox3_name = Inbox
    imapsieve_mailbox3_causes = APPEND
    imapsieve_mailbox3_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

    sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve
}

/usr/local/lib/dovecot/sieve/report-spam.sieve

Here we are going to fill in the default action for when we move mails to the spam folder. In our case we have rspamd learn them as spam:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
    set "username" "${1}";
}

pipe :copy "sa-learn-spam.sh" [ "${username}" ];

/usr/local/lib/dovecot/sieve/sa-learn-spam.sh

Fill the file with the following contents:

#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_spam

/usr/local/lib/dovecot/sieve/report-ham.sieve

Here we are going to fill in the default action for when we move mails out of the spam folder. In our case we have rspamd learn them as ham:

require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
    set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
    stop;
}

if environment :matches "imap.user" "*" {
    set "username" "${1}";
}

pipe :copy "sa-learn-ham.sh" [ "${username}" ];

/usr/local/lib/dovecot/sieve/sa-learn-ham.sh

Fill the file with the following contents:

#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_ham

making them both executable

To make them runnable by the system, we make them executable:

chmod +x /usr/local/lib/dovecot/sieve/*.sh

/etc/dovecot/conf.d/90-sieve.conf

And here we configure the sieve folder that gets used for everyone.

sieve_before = /var/vmail/sieve/

/var/vmail/sieve/junk.sieve

First create that folder:

mkdir -p /var/vmail/sieve
chown -R vmail:vmail /var/vmail

Then we sieve away the spam:

require "fileinto";

if header :contains "X-Spam-Status" "YES" {
    fileinto "Junk";
    stop;
}
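If you want to syntax-check a script by hand before dovecot compiles it on delivery, pigeonhole ships a compiler for that:

sievec /var/vmail/sieve/junk.sieve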

/etc/dovecot/conf.d/auth-sql.conf.ext

Here we configure our override fields, so we don’t have to do an ugly select:

userdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql.conf.ext
    override_fields = uid=vmail gid=vmail home=/var/vmail/%u
}

/etc/dovecot/dovecot-sql.conf.ext

We are almost done; now we configure how dovecot accesses the database. Append this to the end:

driver = pgsql
connect = dbname=mail user=mail password=your.mail.password
default_pass_scheme = CRYPT

password_query = SELECT email AS user, '{CRYPT}' || password AS password FROM accounts WHERE active = true AND email = '%u' AND email != '' AND password != ''
user_query = SELECT email FROM delivery WHERE email = LOWER('%u')

/etc/dovecot/dovecot.conf

Last but not least we update the protocols we are going to use:

protocols = imap pop3 lmtp

And finally start the dovecot:

rcctl start dovecot

Adding Accounts and Aliases

To generate securely hashed passwords, you can use “smtpctl encrypt”, either passing the password as an argument or entering it interactively.
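For example (the hash shown here is shortened and made up; yours will differ):

# smtpctl encrypt secretpassword
$2b$10$eXaMpLeOnLyNoTaReAlHash...

The resulting hash can then be used as the replacement for PASSWORD below: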

INSERT INTO accounts (email,password) VALUES ('my@first.email.address','PASSWORD');
INSERT INTO virtuals (email,destination) VALUES ('my@second.mail.address','my@first.email.address');

That’s it

You should now be able to use this setup as expected.
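If something doesn’t work, a quick way to sanity-check the SQL authentication path is doveadm, which will prompt for the password:

doveadm auth test my@first.email.address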

If you find any errors, you can find me on Twitter and let me know!

Call Of Duty: Warzone - My dirty tricks

Here is my list of dirty little secrets that make the game more fun and yield more kills.

Sending Players to the Gulag before they hit the ground

  • while parachuting with other players beside you, cut your parachute and start shooting at them with your weapon.
  • skydive to a helicopter fast, get in and fly into slow players with parachutes, effectively sending them to the Gulag before they hit the ground.

Effectively using the time in the Gulag when watching from above

  • spray as many of your opponents as you can to make them more visible during the fight
  • if you see a player down there placing a Claymore Mine, try throwing a rock at the mine to set it off into his face
  • if a player down there gets hit and isn’t done yet, you can try to kill him by throwing rocks at his head

On the Battlefield

  • place a Trophy system on your Jeep or into your Helicopter for that automatic defense system
  • when three people on the same team call in a drone at the same time, your team will get an advanced drone for the remainder of the game, showing everyone on the map

if you have more of these

Just go ahead and contact me on Twitter and i will add your tips with credits here.

Now go out and use them wisely. ;)

cheers

PostgreSQL custom sorting made easy

Every developer knows the pain of sorting database rows by some custom field. The easiest example of this is DNS records.

Today i’ve come across a solution using array_position. Basically, you can pass it the order of elements you want on top, and the rest will sort below.

It works like this

SELECT hostname, type, content FROM records AS r
ORDER BY array_position(array['SOA'::varchar, 'NS'::varchar, 'A'::varchar], r.type)

Now all records having type SOA will be first, type NS second and type A third. Everything else will come after.

The important part

Make sure you typecast your custom array into the type of the column you are ordering against.
In my case the type column is a varchar, that’s why i am casting all elements to it.
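The reason the leftovers end up at the bottom: array_position returns NULL for values that are not in the array, and in ascending order PostgreSQL sorts NULLs last. A quick throwaway check:

SELECT array_position(array['SOA'::varchar, 'NS'::varchar], 'TXT'::varchar);
-- returns NULL, so TXT rows sort after the listed types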

I hope this helps some of you avoid writing custom sorting in code.

Switching to Jekyll

For almost 8 years i have run this blog on my own blog engine, println. While at the beginning it gave me all i needed, in recent years my behavior has changed, and thus my preferences. I don’t like editing blog posts in the browser anymore. I got even more used to vi (neovim specifically) than i was before. So it was time for a change.

After a bit of exporting my old PostgreSQL database for my blog into Jekyll format, i am happy to present the result. I hope i got all the permalinks right, but i guess i will find out.

I really hope that this motivates me to write more frequently in the future again.

cheers

Using Juniper JunOS apply-groups for IXPs (like AMS-IX or DECIX)

So recently i’ve been cleaning out configurations on our network equipment in order to get rid of technical debt. Two of these missions were simplifying our switch and router configurations. This has been on my todo-list forever, but i hardly ever got around to researching it.

The Problem

If you’re operating JunOS switches or routers, you have probably come across a lot of duplicate configuration. Imagine a client (let’s call them “Acme Corp”) having 2 switchports configured on one of your EX Series switches. Usually this would look something like this:

ge-0/0/0 {
    description "Acme Corp - Server 1 - Port 0";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}
ge-0/0/1 {
    description "Acme Corp - Server 1 - Port 1";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}

There is nothing wrong with that, but this gets you a lot of configuration lines very fast, which makes it a little hard to maintain in my opinion.

Same goes for BGP peers, your configuration for AMS-IX peers will repeat itself over and over again.

group amsix-v4-rs {
    type external;
    description "AMS-IX IPv4 Route Servers";
    local-preference 200;
    import peer-in;
    family inet {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 80.249.208.255;
    neighbor 80.249.209.0;
}
group amsix-v6-rs {
    type external;
    description "AMS-IX IPv6 Route Servers";
    local-preference 200;
    import peer-in;
    family inet6 {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 2001:7f8:1::a500:6777:1 {
        description rs1.ams-ix.net;
    }
    neighbor 2001:7f8:1::a500:6777:2 {
        description rs2.ams-ix.net;
    }
}

Here again, lots of configuration repeating itself (apart from these two being a v4/v6 mix). Overall, lots of stuff gets repeated for BGP peers over and over again, which makes policy changes a tedious task where you have to touch every single peer.

How to do it cleanly then?

I’m guessing (by the fact that you visited this blog post) that apply-groups are new to you, so i’m gonna explain them in a simple way. There are probably things here and there that could be done better, but this works exceptionally well for me.

What would the Switch config look like with apply-groups?

First we would set the apply groups:

groups {
    ACME-SERVER {
        interfaces {
            <*> {
                description "Acme Corp Server Interface";
                unit 0 {
                    family ethernet-switching {
                        port-mode trunk;
                        vlan {
                          members public, acme-private;
                        }
                    }
                }
            }
        }
    }
}

then configure the interfaces

interfaces {
   ge-0/0/0 {
        description "Acme Corp - Server 1 - Port 0";
        apply-groups ACME-SERVER;
    }
    ge-0/0/1 {
        description "Acme Corp - Server 1 - Port 1";
        apply-groups ACME-SERVER;
    }
}

This makes it so much easier to tag Switchports for various types of configurations, without having to keep track of all the changes across each interface.
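To double-check what an interface actually ends up with, JunOS can expand the groups for you with the display inheritance pipe:

show configuration interfaces ge-0/0/0 | display inheritance

Statements pulled in from a group are annotated with comments, so you instantly see where each line comes from.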

What would a BGP config look like?

Again we set up the apply groups:

groups {
    AMSIX-BGP-v4 {
        protocols {
            bgp {
                group <*> {
                    type external;
                    description "AMS-IX BGP Peer";
                    local-preference 200;
                    import peer-in;
                    family inet {
                        unicast;
                    }
                    export peer-out;
                    remove-private;
                }
            }
        }
    }
}

Now our BGP Peer group section looks like this:

protocols {
    bgp {
        group amsix-v4-rs {
            apply-groups AMSIX-BGP-v4;
            description "AMS-IX IPv4 Route Servers";
            peer-as 6777;
            neighbor 80.249.208.255;
            neighbor 80.249.209.0;
        }
    }
}

What we learned

You now know how to easily manage templates for JunOS configuration sections. This knowledge applies to all other configuration areas as well, as far as i know. It’s not limited to these 2 scenarios, so feel free to play around with it. :)

Thanks for reading

Simplistic Auto Provisioning for BSDs, UNIX and Linux, using just DHCP

For a few weeks now i’ve been thinking about better tools to provision our bare-metal servers and VMs. All tools out there are IMHO bloatware. Over-complicated stuff where nobody knows when the next library upstream will break feature X which will prevent shit from working. Typical wobbly constructs we have these days. I’m not a fan of them, you shouldn’t be either.

But yesterday noon i read one more of these guides to set up something, which wants you to curl their installer and pipe it through bash. YIKES.

Then, in my typical haze, i decided to play a little mind-game: WHEN would this curl | bash scenario be valid, or at least a bearable solution? Of course! A solution to my previous provisioning dilemma presented itself…

What you need

  • a HTTP Server (nginx, apache, anything that can serve a file)
  • a DHCP Server where you can define custom fields (dnsmasq, kea, isc-dhcpd, ..)
  • a DHCP Client which lets you parse custom fields (dhcpcd or isc-dhclient; NOT Henning Brauer’s dhclient)

The quick gist

  • DHCP server sends out custom field with URL inside
  • DHCP client picks up that field, processes it in hook with curl | sh

WARNING: THIS IS POTENTIALLY DANGEROUS! THIS IS KEPT SIMPLE FOR THE SAKE OF THIS HOWTO

BETTER APPROACH: gpg sign the script (even when auto-generated) on the server side, and have the client verify the signature against the pubkey.
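A rough sketch of what that verification could look like inside the client hook (untested; the .sig URL convention is made up, and you’d pre-seed the pubkey on your install images):

#!/bin/sh
# fetch the script plus a detached signature and verify before executing
TMP=$(mktemp)
fetch -o ${TMP} ${new_bootstrap}
fetch -o ${TMP}.sig ${new_bootstrap}.sig
gpg --verify ${TMP}.sig ${TMP} || exit 1
sh ${TMP}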

How to do it

First configure your DHCP server to deliver a custom option in the private-use range (224-254). As the payload we just stick in a URL that can be reached from the DHCP client.

dnsmasq.conf

dhcp-option-force=254,http://192.168.0.1/bootstrap.sh

dhcpd.conf

option server-bootstrap code 254 = string;
subnet 192.168.0.0 netmask 255.255.255.0 {
    [...]
    option server-bootstrap "http://192.168.0.1/bootstrap.sh";
}
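If you’re running Kea instead, the equivalent should look roughly like this inside kea-dhcp4.conf (an untested sketch; the option name is arbitrary):

"Dhcp4": {
    "option-def": [
        { "name": "server-bootstrap", "code": 254, "type": "string", "space": "dhcp4" }
    ],
    "option-data": [
        { "name": "server-bootstrap", "space": "dhcp4", "data": "http://192.168.0.1/bootstrap.sh" }
    ]
}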

Client configuration

Next you need to slightly modify your client’s setup. I’ve only used dhcpcd for this, as FreeBSD’s and OpenBSD’s default dhclient can’t do custom fields anymore; they all get filtered and there is no configuration option for it.

dhcpcd

On FreeBSD, i’ve placed a dhcpcd.enter-hook script at /usr/local/etc/dhcpcd.enter-hook

#!/bin/sh

# for security reasons, you should really check here if bootstrapping is required
# you don't want anyone pushing bad scripts that get executed by a rogue dhcp server
if [ "${new_bootstrap}" != "" ]; then
    TMP=$(mktemp)
    fetch -o ${TMP} ${new_bootstrap}
    # for more security, you might also want to gpg sign your script and have gpg verify it here
    sh ${TMP} || exit 1
fi

Last, we need to modify dhcpcd.conf to request the extra field, so it gets delivered by the DHCP server. I just added these two lines to the defaults:

define 254 string bootstrap
option bootstrap

bootstrap.sh hosted on the HTTP Server

This is our bootstrapping shell script. It could be anything: there could be one of these per profile, there could be a rendering process on the server side, whatever floats your boat. Mine is just a basic sample to get the idea across:

#!/bin/sh

echo
echo
echo "first: do some meaningful diagnosis/inventory here"
echo "  like posting dmidecode and other stuff to your remote"
echo
echo "second: if this is used to bootstrap bare metal machines booting pxe"
echo "  IMPORTANT: check for existing installations on your disk"
echo "             like is there a partitioning scheme already here?"
echo "  then you could go ahead and install whatever you want"
echo
echo "third: enroll this system into configuration management like CFengine"
echo "  like: cf-agent -B your.cf.host && cf-agent -KIC"
echo
echo "sleeping 10 seconds... then just running some wall command"
sleep 10
echo "dhcp-bootstrapping sez HELLO KITTENS!"|wall

The result

Output from running dhcpcd em0 shows that it works :)

DUID 00:01:00:01:21:6f:7f:9e:08:00:27:d7:7f:f9
em0: IAID 27:d7:7f:f9
em0: rebinding lease of 192.168.168.80
em0: leased 192.168.168.80 for 7200 seconds
em0: changing route to 192.168.168.0/24
em0: changing default route via 192.168.168.1
/tmp/tmp.BYYgx9dr                             100% of  691  B 1670 kBps 00m00s


first: do some meaningful diagnosis/inventory here
  like posting dmidecode and other stuff to your remote

second: if this is used to bootstrap bare metal machines booting pxe
  IMPORTANT: check for existing installations on your disk
             like is there a partitioning scheme already here?
  then you could go ahead and install whatever you want

third: enroll this system into configuration management like CFengine
  like: cf-agent -B your.cf.host && cf-agent -KIC

sleeping 10 seconds... then just running some wall command

Broadcast Message from root@test
        (/dev/pts/0) at 15:45 CEST...

dhcp-bootstrapping sez HELLO KITTENS!

forked to background, child pid 34507
root@test:~ #

Final thoughts

These very simple elements, thrown together in the right way, make for a very reliable and especially maintainable setup! No wiggly parts, no extra software you don’t have running anyways. Just plain old Ops-Tech put together the right way. Easy to investigate with tools you already know, easy to customize the heck out of.

I hope this helps some of you build better, more reliable, and easier-to-maintain systems.