Why did the JVM start to suck?
When I started using the JVM, I was happy that my application and its virtual machine/runtime would be separate parts. After 9 years of coding nearly full-time Scala, I've come to hate it. Why?
Because the variance between JVM versions makes it extremely hard to build predictable applications. One version does this, the next breaks that, so from a quality coder's perspective, you have to work around your runtime's issues and capabilities.
Next up: in order to use the latest features like TLS SNI (which isn't really cutting edge in the wake of TLS 1.3), you need to keep your JVM/runtime up to date everywhere you want to run that feature. (TLS SNI arrived with the Java 7→8 transition.)
If you're a coder with no ops responsibilities, this might seem acceptable to you, but I have to care about operating the code that I write just as much as I have to care about the code itself!
So what makes golang (imho) superior?
You get a statically linked binary. No runtime, nothing else to install.
This is especially awesome from a deployment standpoint, as you only need to take care of your binary and its assets (if any).
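To make that concrete, here is the smallest possible example (a sketch, obviously, not anyone's production code): a plain `go build` on this file produces a single self-contained executable you can copy to a target host without installing anything else.

```go
package main

import "fmt"

// greeting returns the message this toy program prints; split out as a
// function purely so the example has something trivially checkable.
func greeting() string {
	return "hello from a single static binary"
}

func main() {
	// `go build hello.go` turns this into one statically linked
	// executable -- no interpreter or runtime needed on the target.
	fmt.Println(greeting())
}
```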
Also noteworthy: my Scala/Java .jars (with all dependencies bundled) were rarely less than 60 MB, on top of a 500 MB+ JVM installation. That makes for a lot of wasted disk space and things that need regular updating. My golang binaries are rarely more than 13 MB, all in.
Last but not least, scala-sbt sucks donkey balls. Straight up. In my opinion, it is the single worst build tool EVER conceived by a human! It regularly breaks backward compatibility, requiring me to deal with new plugins and shit. HORRIBLE!
I want a build tool that just builds my code and churns out a usable binary form.
Which is exactly what the 'go' tool does. Apart from its feature-richness (testing, fuzzing and all that nice stuff), it also builds code reliably and without much of a config file that I need to keep in shape! A stupid-simple Makefile has sufficed for all my needs so far.
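As an illustration of that built-in testing support: drop a file ending in `_test.go` next to your code and `go test` finds and runs it, no build-tool configuration at all. A minimal sketch (the `Reverse` function is made up here just so the test has something to exercise; it's defined in the same file for self-containment):

```go
// reverse_test.go -- `go test` discovers Test* functions automatically.
package main

import "testing"

// Reverse returns s with its runes in reverse order (example function
// under test; normally it would live in a separate non-test file).
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func TestReverse(t *testing.T) {
	cases := map[string]string{"golang": "gnalog", "": "", "a": "a"}
	for in, want := range cases {
		if got := Reverse(in); got != want {
			t.Errorf("Reverse(%q) = %q, want %q", in, got, want)
		}
	}
}
```

Running `go test ./...` from the module root runs every such test in the tree.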
Also, when I needed disk space previously on Scala/JVM, rm -rf ~/.ivy2 solved most of it, since all the dependency jars pulled by sbt live there. But once you do that, maybe you should look for another career, since it's likely that some artifacts/jars are no longer available upstream, breaking your build. In Go, by contrast, I just git clone my dependency's source into my repository, either adding it as a git submodule or straight up git add-ing the dependency code.
Scala binary incompatibility (update to original article)
A number of people pointed out that having a binary dependency cache is almost as good as having sources.
Well, ever come across multiple Scala versions? Or have you been in the Scala game too short a time to know about Scala binary incompatibilities? Yeah, they're fucking awesome if you love that kind of stuff. I don't. I don't want to hunt down all the dependencies of package X that only worked on Scala 2.9 but need to be recompiled for your 2.10 project. Or 2.11, or whatever.
Have fun going through that. I wish you lots of it.
Inline bugfixing (added as well after original publication)
I don't know about you guys, but I like to fix bugs in other people's code that I use. It fills me with pride and makes me happy to see other people benefit from my fixes.
So whenever I had to track down issues in Scala/JVM-land, my usual procedure was to download that library's sources, then try to get that developer's build tool to work. Sometimes it's sbt. Sometimes it's ant. Sometimes maven. Sometimes something I hadn't even heard of. Awesome, right?
So first I would spend my time getting that stuff to work, and only then spend my time fixing the bug.
WASTE OF TIME
If I already have the sources, and I already make them compile for my current version, isn't it a lot easier to just go to the line, change it, and test the code?
Or would you rather go through the whole build process with that maintainer's build tool, place the resulting .jar in your cache or deploy it however, then possibly download it again and change your build to use the new artifact?
From a simple logic perspective I'd always choose the first, as it saves me a lot of headache and lets me focus on the problem at hand.
Granted, this isn't an issue on the JVM as long as you have a working JRE for your platform. But having a fat-ass JVM running on your Raspberry Pi might not be the best use of its CPU, again, in my opinion.
How does Go deal with this? There's an excellent talk by Rob Pike about Go compiler internals (slides) that explains that since Go 1.7 you don't go through the C barrier anymore; the toolchain compiles straight from Go to ASM. Yup, fucking dank!
So in order to cross-compile some pure Go code on OSX for my Raspberry Pi, I just run:
GOOS=freebsd GOARCH=arm GOARM=6 go build src/*.go
Yup, that's it. scp that binary over and be happy. Why not build on the ARM itself? Well, a) it would probably take a lot longer than on my Intel i7 octo-core, and b) Go on ARM is only available up to version 1.4 there, since there are some issues with newer versions (I haven't checked further), but cross-compiling with 1.8-HEAD works just fine.
From my first few months of using it in production I can confirm that for my use cases (mostly network code), golang is extremely fast, even though zero-copy isn't supported on FreeBSD yet.
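For a taste of what that network code looks like, here's a minimal sketch (not my production code; `echoRoundTrip` is a name invented for this example): a one-shot TCP echo server on a random local port, plus a client that sends a line through it, all from the standard library.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// echoRoundTrip starts a one-shot echo server on a random local port,
// sends msg (which must end in '\n'), and returns what comes back.
func echoRoundTrip(msg string) string {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // :0 = pick a free port
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	// Server side: accept one connection, echo one line, done.
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		line, _ := bufio.NewReader(conn).ReadString('\n')
		conn.Write([]byte(line)) // echo the line back unchanged
	}()

	// Client side: dial the server, send msg, read the reply line.
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprint(conn, msg)
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	return reply
}

func main() {
	fmt.Print(echoRoundTrip("ping\n")) // prints "ping"
}
```

`go build` on this gives you, again, one static binary; no app server or runtime in sight.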
Memory consumption for our applications is about 1/10th of the original JVM project's, reducing memory requirements throughout our datacenter operations: about 6/10ths of the RAM previously used by JVMs has been freed on our FreeBSD VMs, leaving a LOT of room for new clients/applications of ours.
Golang is going to be my new primary language, with Scala in backup mode only for existing clients whose software, previously developed by me, still needs support.
More go related posts to come in 2017!