Over the last few years, I've immersed myself completely in the AI landscape. I'm not just talking about playing with ChatGPT for fun; I'm talking about high-level consulting, training and running my own models in my own racks here in Nuremberg, Germany, and building scalable services on top of this emerging tech.
Through all of this, if there is one lesson I've learned that I need to hammer home, it's this: it's far from all great, but we're also nowhere near the "doom and gloom" scenario that a lot of conservative IT folks make it out to be.
The “All-Knowing Oracle” Fallacy
The single biggest issue I see in the industry right now is professionals treating AI as an all-knowing oracle. It is not. It is a probabilistic engine with a hard training cutoff.
If you are asking it for recent frameworks, zero-day exploits, or bleeding-edge library changes without deep research or web search capability enabled, you will get old or wrong information. That is not the AI being “stupid” or “useless”—that is you using the tool incorrectly.
You wouldn’t use a hammer to drive in a screw and then complain that the hammer is broken.
It’s a Tool, Not a Replacement
This technology is nothing more than a new, incredibly powerful tool in your tool belt. It is not a god-tool that solves architecture problems by magic. But if you take the time to actually master the AI CLI tools, you can use them to control your system with ease or generate boilerplate and logic faster than you can physically type.
The barrier to entry here is the willingness to learn. Learn the specific quirks of the models, learn how to prompt correctly, and mastery will follow.
The Cost of Mastery (and how to mitigate it)
I hear the complaints constantly: “I’m not paying $20 for ChatGPT” or “$200 for the good Claude plan is too much.”
Look, if you want professional results, you need professional tools. However, I know the subscriptions add up, especially when you need access to the absolute top-tier context windows.
Here is a hot tip for the budget-conscious:
You can grab legitimate accounts on marketplaces like G2A for a fraction of the enterprise price.
We are talking about a $200 subscription tier for roughly $20.
Disclaimer: You are buying a pre-provisioned account, not upgrading your personal one. Just make sure you understand the trade-offs: you aren't sharing the account with anyone, but you will get a fresh login every month.
My Current “Daily Driver” Loadout
The landscape changes weekly, but as of late 2025, this is what I am actually running in production:
1. Context Heavy Lifting:
MiniMax Plus/Max ($20/$50). This has been my daily workhorse for anything requiring massive context. When you need to dump an entire repository into the prompt to refactor a legacy module, this is currently the king.
2. Integration & Logic:
Google Gemini CLI & Gemini 3 Pro. These past couple of days, this has been a standout performer. It handles system integration and logical reasoning surprisingly well, often beating out the others on complex instruction following.
3. The “Classic” Options:
I still maintain subscriptions for Claude Code and OpenAI Codex, but it feels like they are constantly playing games with their user base. They release a ground-breaking model, get everyone hooked, and then silently nerf it to save on backend compute costs once scaling becomes an issue.
I find myself constantly toggling between max subscriptions for both. It’s an endless cycle of GPT 5, 5.1, 5.2, followed by Claude retorting with Sonnet 4.5, Opus 4.5, and Haiku 4.5. The quality fluctuates wildly, so you need to be agile and willing to switch providers.
(Side note: It is genuinely sad that DeepSeek and Kimi don't offer a dedicated code subscription yet. I'd jump on that in a heartbeat.)
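On the "dump an entire repository into the prompt" point above: I don't mean anything fancy. A plain shell loop that concatenates files with their paths as separators is enough to build a context blob for a large-context model. The file glob and output path here are just examples; adjust both to your repo:

```shell
# Collect every Python file (skipping .git) into one annotated blob that
# can be pasted or piped into a large-context model.
find . -name '*.py' -not -path './.git/*' -print0 |
  while IFS= read -r -d '' f; do
    printf '===== %s =====\n' "$f"   # path marker so the model knows which file is which
    cat "$f"
  done > /tmp/repo-context.txt

wc -c /tmp/repo-context.txt   # sanity-check the blob size before pasting
```

The `=====` markers matter more than they look: without them, the model has no idea where one file ends and the next begins.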
A Real World Case Study: The GitLab Upgrade
To prove this isn’t just theoretical, here is a recent win. I successfully used Claude Code to upgrade a self-hosted GitLab instance from v15 to v17.
If you've ever managed GitLab, you know the upgrade paths are treacherous. This instance was running on Kubernetes, which adds a massive layer of complexity. We even hit a snag where the PostgreSQL version requirements changed mid-stream; I was running an old PostgreSQL 13 instance from before pg_upgrade automation became standard (it is now), and the AI handled that transition flawlessly too.
It took about a day. I didn't just tell the AI "upgrade this"; that would be suicide. I commanded it to perform backups. I had it verify migration paths. I had it check for deprecations. The AI was the hands, but I was the supervisor.
The result? Zero data loss. Full upgrade completed in about 24 hours. A human team debugging the Helm charts, migration failures, and PostgreSQL schema changes would have likely taken three times as long.
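To make that concrete, here is a sketch of the kind of pre-flight script I had the AI write and run before each upgrade stop. The release name and namespace are illustrative; `backup-utility` and the toolbox pod come from the official GitLab Helm chart, but verify every command against your own deployment before trusting it:

```shell
#!/usr/bin/env bash
# Pre-flight checks before each GitLab upgrade stop. Release and
# namespace "gitlab" are illustrative placeholders.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"   # default: print the plan instead of executing it

step() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "PLAN: $*"
  else
    echo "RUN:  $*"
    "$@"
  fi
}

# 1. Backups first, always (backup-utility ships in the chart's toolbox pod).
step kubectl exec -n gitlab deploy/gitlab-toolbox -- backup-utility

# 2. Record the current chart and app versions so the upgrade path can be
#    checked against GitLab's documented required stops.
step helm history gitlab -n gitlab --max 3

# 3. Make sure background migrations from the previous stop are finished.
step kubectl exec -n gitlab deploy/gitlab-toolbox -- \
  gitlab-rails runner 'puts Gitlab::BackgroundMigration.remaining'
```

Only once the output of every step checked out did the AI get to touch the actual `helm upgrade`.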
The Golden Rule: Context and Verification
Here is the trick that makes this viable in production: always prompt the AI to verify what it just did after every command.
Do not just "rawdog" a sequence of 20 commands blindly and then discover "whoopsie, I deleted your prod db." You need to be explicit. Tell it: "This IS the production deployment. Be careful."
It can be a tremendously effective engineer and problem solver, but only if you let it in on the details.
If you assume it should know that prd5.xyz.gcp... is your production environment, ask yourself: would a trainee or a fresh hire understand that context immediately? Probably not.
So treat the AI like that new hire. Give it the full context. Tell it the stakes. And force it to verify its own work step-by-step.
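The same rule applies when the AI hands you a sequence of shell steps: don't run them as one blind blob. A tiny wrapper (the `run_step` name is mine, purely illustrative) that aborts the sequence on the first failure mirrors the "verify after every command" habit:

```shell
# Run one step and refuse to continue the moment anything fails, instead
# of blindly firing off twenty commands in a row.
run_step() {
  echo ">> $*"
  "$@" || { echo "!! step failed, aborting: $*" >&2; return 1; }
}

# Example sequence: take a "backup", then VERIFY it is non-empty before
# any later, destructive step is allowed to run.
run_step mkdir -p /tmp/demo-backup &&
run_step sh -c 'echo dump > /tmp/demo-backup/dump.txt' &&
run_step test -s /tmp/demo-backup/dump.txt &&
run_step echo "backup verified, safe to continue"
```

It is boring, deliberate, and exactly how you would supervise a new hire's first week on a production box.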
The Plumber Analogy
This brings me to the most critical point: Do not use AI for tasks where you cannot judge the output.
It is basically a super-fast typewriter for your ideas. Use it to accelerate your workflow.
However, if you veer off into territory you don’t understand—if you ask it to write kernel modules when you don’t know C, or manage a Kubernetes cluster when you don’t understand pods—you will end up with a destroyed home directory or a compromised production database.
If used correctly, it is a humongous multiplier for any IT professional.
If used incorrectly, it is a liability.
Think of it like a plumber. If a plumber doesn’t know their tools and floods a client’s bathroom, it isn’t the wrench’s fault. It’s the plumber’s fault.
It’s your fault.
If you're in IT (otherwise you probably would not be reading this), a core part of your job is constantly learning new tools and emerging tech. If you're not doing that, then maybe IT isn't for you?
In any case: this is a new tool and most of us should at least learn how to use it. It will make a massive impact on how much time you actually spend solving problems versus solving the “problems around your actual problems.” We all know the drill: you just wanted to add a user, but some certificate expired, then some policy needed updating from the last version, and suddenly it’s 3 hours later.
Why not just have the AI do that in the background? Send it off, fetch yourself a coffee, call someone, or clear a few 5-minute tasks from your list. The number of times per day I can knock out small tasks, because the big stuff (the stuff that usually requires huge amounts of focus) now only requires me to review, steer, and manage, is life-changing.
I've basically become a manager and reviewer of 10 virtual me's. By now, I've got them almost behaving like me and doing things exactly like I'd do them. Make use of this. Use it to free up your day for other things like playing video games, going outside, or spending time with your pets, or go full 100x engineer and be your own company of 30 (10 virtual employees working around the clock cover three 8-hour shifts each, so roughly 30 FTEs).
Stop whining, pay the few bucks, and actually learn to use the tools of your trade.
Merry Christmas 🎄