PostgreSQL custom sorting made easy

Every developer knows the pain of sorting database rows by some custom order. The easiest example of this is DNS records.

Today i’ve come across a solution using array_position. Basically you pass it the order of elements you want on top, and the rest will sort below.

It works like this:

SELECT hostname, type, content FROM records AS r
ORDER BY array_position(ARRAY['SOA'::varchar, 'NS'::varchar], r.type), type;

Now all records of type SOA will come first and type NS second. Everything else comes after, sorted by type, so the A records end up third.

The important part

Make sure you typecast your custom array into the type of the column you are ordering against.
In my case the type column is a varchar, that’s why i am casting all elements to it.

I hope this helps some of you avoid writing custom sorting logic in code.
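One more detail worth knowing: array_position returns NULL for any value not in the array, and PostgreSQL sorts NULLs last in ascending order, which is exactly why the unlisted types end up at the bottom. If you ever want the unlisted types on top instead, you can flip that explicitly (untested sketch):

```
SELECT hostname, type, content FROM records AS r
ORDER BY array_position(ARRAY['SOA'::varchar, 'NS'::varchar], r.type) NULLS FIRST, type;
```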

Switching to Jekyll

For almost 8 years i have run this blog on my own blog engine, println. While in the beginning it gave me all i needed, in recent years my behavior has changed, and with it my preferences. I don’t like editing blog posts in the browser anymore. I got even more used to vi (neovim specifically) than i was before. So it was time for a change.

After exporting my blog’s old PostgreSQL database into Jekyll’s format, i am happy to present the result. I hope i got all the permalinks right, but i guess i will find out.

I really hope that this motivates me to write more frequently in the future again.

cheers

Using Juniper JunOS apply-groups for IXPs (like AMS-IX or DECIX)

So recently i’ve been cleaning out configurations on our network equipment, in order to get rid of technical debt. Two of these missions were simplifying our switch and router configurations. This has been on my todo-list forever, but i hardly ever got around to researching it.

The Problem

If you’re operating JunOS switches or routers, you have probably come across a lot of duplicate configuration. Imagine a client (let’s call them “Acme Corp”) has 2 switchports configured on one of your EX Series switches. Usually this would look something like this:

ge-0/0/0 {
    description "Acme Corp - Server 1 - Port 0";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}
ge-0/0/1 {
    description "Acme Corp - Server 1 - Port 1";
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members public, acme-private;
            }
        }
    }
}

There is nothing wrong with that, but it racks up a lot of configuration lines very fast, which in my opinion makes it a little hard to maintain.

The same goes for BGP peers: your configuration for AMS-IX peers will repeat itself over and over again.

group amsix-v4-rs {
    type external;
    description "AMS-IX IPv4 Route Servers";
    local-preference 200;
    import peer-in;
    family inet {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 80.249.208.255;
    neighbor 80.249.209.0;
}
group amsix-v6-rs {
    type external;
    description "AMS-IX IPv6 Route Servers";
    local-preference 200;
    import peer-in;
    family inet6 {
        unicast;
    }
    export peer-out;
    remove-private;
    peer-as 6777;
    neighbor 2001:7f8:1::a500:6777:1 {
        description rs1.ams-ix.net;
    }
    neighbor 2001:7f8:1::a500:6777:2 {
        description rs2.ams-ix.net;
    }
}

Here again, lots of configuration repeating itself (apart from the differences between the v4 and v6 families). Overall, lots of stuff gets repeated for BGP peers over and over again, which makes changes to policies a tedious task where you have to update every single BGP peer.

How to do it cleanly then?

I’m guessing (by the fact that you visited this blog post) that apply-groups are a new thing to you, so i’m going to explain them in a simplified way. There are probably details here and there that could be done better, but this works exceptionally well for me.

What would the switch config look like with apply-groups?

First we would set the apply groups:

groups {
    ACME-SERVER {
        interfaces {
            <*> {
                description "Acme Corp Server Interface";
                unit 0 {
                    family ethernet-switching {
                        port-mode trunk;
                        vlan {
                          members public, acme-private;
                        }
                    }
                }
            }
        }
    }
}

Then we configure the interfaces:

interfaces {
    ge-0/0/0 {
        description "Acme Corp - Server 1 - Port 0";
        apply-groups ACME-SERVER;
    }
    ge-0/0/1 {
        description "Acme Corp - Server 1 - Port 1";
        apply-groups ACME-SERVER;
    }
}

This makes it so much easier to tag switchports for various types of configuration, without having to keep track of all the changes across each interface.
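A handy way to double-check what such a group actually expands to is JunOS’ inheritance view (operational-mode command, written from memory, so verify it on your version):

```
show configuration interfaces ge-0/0/0 | display inheritance
```

This renders the configuration with all apply-groups merged in, so you can confirm the interface really picked up the group’s contents.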

What would a BGP config look like?

Again we set up the apply groups:

groups {
    AMSIX-BGP-v4 {
        protocols {
            bgp {
                group <*> {
                    type external;
                    description "AMS-IX BGP Peer";
                    local-preference 200;
                    import peer-in;
                    family inet {
                        unicast;
                    }
                    export peer-out;
                    remove-private;
                }
            }
        }
    }
}

Now our BGP Peer group section looks like this:

protocols {
    bgp {
        group amsix-v4-rs {
            apply-groups AMSIX-BGP-v4;
            description "AMS-IX IPv4 Route Servers";
            peer-as 6777;
            neighbor 80.249.208.255;
            neighbor 80.249.209.0;
        }
    }
}
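For the v6 route servers you need a second group, since the address family differs. A sketch along the same lines (untested, the group names are mine):

```
groups {
    AMSIX-BGP-v6 {
        protocols {
            bgp {
                group <*> {
                    type external;
                    description "AMS-IX BGP Peer";
                    local-preference 200;
                    import peer-in;
                    family inet6 {
                        unicast;
                    }
                    export peer-out;
                    remove-private;
                }
            }
        }
    }
}
protocols {
    bgp {
        group amsix-v6-rs {
            apply-groups AMSIX-BGP-v6;
            description "AMS-IX IPv6 Route Servers";
            peer-as 6777;
            neighbor 2001:7f8:1::a500:6777:1;
            neighbor 2001:7f8:1::a500:6777:2;
        }
    }
}
```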

What we learned

You now know how to easily manage templates for JunOS configuration sections. As far as i know, this applies to all other configuration areas as well; it’s not limited to these 2 scenarios, so feel free to play around with it. :)

Thanks for reading

Simplistic Auto Provisioning for BSDs, UNIX and Linux, using just DHCP

For a few weeks now i’ve been thinking about better tools to provision our bare-metal servers and VMs. All tools out there are, IMHO, bloatware: over-complicated stuff where nobody knows when the next library upstream will break feature X and prevent shit from working. Typical wobbly constructs we have these days. I’m not a fan of them, and you shouldn’t be either.

But yesterday noon i read one more of these guides to set up something, which wants you to curl their installer and pipe it through bash. YIKES.

Then, in my typical haze, i decided to play a little mind-game: WHEN would this curl | bash scenario be valid, or at least a bearable solution? Of course! A solution to my previous provisioning dilemma presented itself…

What you need

  • an HTTP server (nginx, apache, anything that can serve a file)
  • a DHCP server where you can define custom fields (dnsmasq, kea, isc-dhcpd, ..)
  • a DHCP client which lets you parse custom fields (dhcpcd, isc-dhclient; NOT Henning Brauer’s dhclient)

The quick gist

  • DHCP server sends out custom field with URL inside
  • DHCP client picks up that field, processes it in hook with curl | sh

WARNING: THIS IS POTENTIALLY DANGEROUS! THIS IS KEPT SIMPLE FOR THE SAKE OF THIS HOWTO

BETTER APPROACH: gpg sign the script (even when auto-generated) on the server side, and have the client verify the signature against the pubkey.
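A minimal sketch of what that could look like on the client side, assuming the server serves a detached signature next to the script (bootstrap.sh.sig) and the signing pubkey was imported into the client’s keyring at install time. The function name and file layout are my own invention, not part of any tool:

```shell
#!/bin/sh
# hypothetical verify-then-run helper: refuses to execute anything whose
# detached gpg signature does not verify against an already-imported pubkey
run_verified() {
    script="$1"
    sig="$2"
    if gpg --verify "$sig" "$script" 2>/dev/null; then
        sh "$script"
    else
        echo "signature verification failed, not running $script" >&2
        return 1
    fi
}
```

In the dhcpcd hook further down you would then fetch both ${new_bootstrap} and ${new_bootstrap}.sig into temp files and hand them to run_verified instead of running the script directly.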

How to do it

First, configure your DHCP server to deliver a custom field in the reserved option range (upwards of 200 i think, but check before you decide). In the payload we just stick a URL that can be reached by the DHCP client.

dnsmasq.conf

dhcp-option-force=254,http://192.168.0.1/bootstrap.sh

dhcpd.conf

option server-bootstrap code 254 = string;
subnet 192.168.0.0 netmask 255.255.255.0 {
    [...]
    option server-bootstrap "http://192.168.0.1/bootstrap.sh";
}
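Since kea is on the list too, the equivalent there should look roughly like this. I’m sketching this from memory of the kea docs, so double-check the option-def syntax before relying on it:

```
{
  "Dhcp4": {
    "option-def": [
      { "name": "server-bootstrap", "code": 254, "type": "string" }
    ],
    "option-data": [
      { "name": "server-bootstrap", "data": "http://192.168.0.1/bootstrap.sh" }
    ]
  }
}
```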

Client configuration

Next you need to slightly modify your client’s setup. i’ve only used dhcpcd for this, as FreeBSD’s and OpenBSD’s default dhclient can’t do custom fields anymore: they all get filtered, and there is no configuration option for it anymore.

dhcpcd

On FreeBSD, i’ve placed a dhcpcd.enter-hook script at /usr/local/etc/dhcpcd.enter-hook:

#!/bin/sh

# for security reasons, you should really check here if bootstrapping is required
# you don't want anyone pushing bad scripts that get executed by a rogue dhcp server
if [ "${new_bootstrap}" != "" ]; then
    TMP=$(mktemp)
    fetch -o ${TMP} ${new_bootstrap}
    # for more security, you might also want to gpg sign your script and have gpg verify it here
    sh ${TMP} || exit 1
fi
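The “check if bootstrapping is required” comment above can be satisfied with something as dumb as a marker file, so the bootstrap can’t be re-triggered on every lease renewal. A sketch (the marker path is my choice, not a convention):

```shell
#!/bin/sh
# hypothetical run-once guard: skip bootstrapping if the marker file exists
MARKER=/var/db/bootstrapped

bootstrap_required() {
    [ ! -e "$MARKER" ]
}

if bootstrap_required; then
    echo "would fetch and run the bootstrap script here"
    touch "$MARKER" 2>/dev/null || true
fi
```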

Last, we need to modify dhcpcd.conf to request the extra field, so it gets delivered by the DHCP server. I just added these two lines to the default:

define 254 string bootstrap
option bootstrap

bootstrap.sh hosted on the HTTP Server

This is our bootstrapping shell script. It could be anything: there could be one per profile, or a rendering process on the server side, whatever floats your boat. Mine is just a basic sample to get the idea across:

#!/bin/sh

echo
echo
echo "first: do some meaningful diagnosis/inventory here"
echo "  like posting dmidecode and other stuff to your remote"
echo
echo "second: if this is used to bootstrap bare metal machines booting pxe"
echo "  IMPORTANT: check for existing installations on your disk"
echo "             like is there a partitioning scheme already here?"
echo "  then you could go ahead and install whatever you want"
echo
echo "third: enroll this system into configuration management like CFengine"
echo "  like: cf-agent -B your.cf.host && cf-agent -KIC"
echo
echo "sleeping 10 seconds... then just running some wall command"
sleep 10
echo "dhcp-bootstrapping sez HELLO KITTENS!"|wall

The result

Output from running dhcpcd em0 shows that it works :)

DUID 00:01:00:01:21:6f:7f:9e:08:00:27:d7:7f:f9
em0: IAID 27:d7:7f:f9
em0: rebinding lease of 192.168.168.80
em0: leased 192.168.168.80 for 7200 seconds
em0: changing route to 192.168.168.0/24
em0: changing default route via 192.168.168.1
/tmp/tmp.BYYgx9dr                             100% of  691  B 1670 kBps 00m00s


first: do some meaningful diagnosis/inventory here
  like posting dmidecode and other stuff to your remote

second: if this is used to bootstrap bare metal machines booting pxe
  IMPORTANT: check for existing installations on your disk
             like is there a partitioning scheme already here?
  then you could go ahead and install whatever you want

third: enroll this system into configuration management like CFengine
  like: cf-agent -B your.cf.host && cf-agent -KIC

sleeping 10 seconds... then just running some wall command

Broadcast Message from root@test
        (/dev/pts/0) at 15:45 CEST...

dhcp-bootstrapping sez HELLO KITTENS!

forked to background, child pid 34507
root@test:~ #

Final thoughts

These very simple elements, thrown together in the right way, make for a very reliable and especially maintainable setup! No wiggly parts, no extra software you don’t already have running anyway. Just plain old ops-tech put together the right way. Easy to investigate with tools you already know, easy to customize the heck out of.

I hope this helps some of you build better, more reliable and easier-to-maintain systems.

Golang is really awesome and why it beats Scala/JVM

So i learned Golang a few months back, thanks to @normanmaurer and @MegOnWheels for the great suggestion! Not because i wanted to, but because Scala and the JVM started to suck after almost a decade.

Why did the JVM start to suck?

When i started using the JVM, i was happy that my application and its virtual machine/runtime would be separate parts. After 9 years of coding Scala nearly full-time, i’ve come to hate it. Why?

Because the variance in the JVM makes it extremely hard to build predictable applications. One version does this, the next breaks that, so from a quality coder’s perspective, you have to work around your runtime’s issues and capabilities.

Next up: in order to use the latest features like TLS SNI (which isn’t really cutting edge in the wake of TLS 1.3), you need to keep your JVM/runtime up to date everywhere you want to run that feature. (TLS SNI was a Java 7 -> 8 thing.)

If you’re a coder with no ops responsibilities, this might seem acceptable to you, but i have to care about operating the code that i write just as much as i have to care about the code itself!

So what makes golang (imho) superior?

You get a statically linked binary. No runtime, no nothing to install.

This is especially awesome from a deployment standpoint, as you only need to take care of your binary and its assets (if any).

Also noteworthy: my Scala/Java .jars (with all dependencies bundled) were rarely less than 60MB, on top of a 500MB+ JVM, which makes for a lot of wasted disk space and things that need regular updating. My golang binaries are rarely more than 13MB, all together.

Last but not least, scala-sbt sucks donkey balls. Straight up. In my opinion, it is the single worst build tool EVER conceived by a human! Regularly breaking backward compatibility, requiring me to deal with new plugins and shit, HORRIBLE!

I want a build tool that just builds my code and churns out a usable binary form.

Which is what the ‘go’ tool actually does. Apart from its feature-richness (testing, fuzzing and all that nice stuff), it also builds code reliably and without much of a config file that i need to keep in shape! A stupid simple Makefile has sufficed for all my needs so far.
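For the record, the kind of stupid simple Makefile i mean is roughly this (the binary name is just an example):

```
# hypothetical minimal Makefile for a go project
all:
	go build -o myapp

test:
	go test ./...

clean:
	rm -f myapp
```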

Also, when i needed disk space previously on Scala/JVM, rm -rf ~/.ivy2 solved most of it, since all the dependency jars pulled by sbt live there. But once you do that, maybe you should look for another career, since it’s likely that some artifacts/jars won’t be available anymore, breaking your build. As opposed to Golang, where i just git clone my dependency sources into my repository and either add them as a git submodule or straight up git add the dependency code.
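The submodule variant boils down to two commands; the URL and vendor path here are placeholders, not a real dependency:

```
git submodule add https://github.com/some/dependency.git vendor/github.com/some/dependency
git submodule update --init --recursive
```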

Scala binary incompatibility (update to original article)

A number of people pointed out that having a binary dependency cache is almost as good as having sources.

Well, ever come across multiple Scala versions? Or been in the Scala game too short a time to know about Scala binary incompatibilities? Yeah, they’re fucking awesome if you love that kind of stuff. I don’t. I don’t want to hunt down all dependencies of package X that only worked on Scala 2.9 but need to be recompiled for your 2.10 project. Or 2.11, or whatever.

Have fun going through that. I wish you lots of it.

Inline bugfixing (added as well after original publication)

I don’t know about you guys, but i like to fix bugs in other people’s code that i use. It fills me with pride and makes me happy to see other people benefiting from my code.

So whenever i had to track down issues in Scala/JVM-land, my usual procedure was downloading that library’s sources, then trying to get that developer’s build tool to work. Sometimes it’s sbt. Sometimes it’s ant. Sometimes maven. Sometimes something i haven’t even heard of. Awesome, right?

Now i would spend my time getting that stuff to work, and only then spend my time fixing the bug.

WASTE OF TIME

If i already have the sources, and i already make them compile for my current version, isn’t it a lot easier to just go to the line, change it, and test the code?

Or would you rather go through the whole build process of that maintainer’s build tool, place the resulting .jar in your cache or deploy it somehow, then possibly download it again and change your build to use the new artifact?

From a simple logic perspective i’d always choose the first, as it saves me a lot of headache and lets me focus on the problem at hand.

Cross compilation

Granted, this isn’t an issue on the JVM as long as you have a working JRE for your platform. But having a fat-ass JVM running on your RaspberryPi might not be the best use of its CPU, again, in my opinion.

How does go deal with this? Well, there is this excellent talk from Rob Pike about go compiler internals (slides), which explains that since go 1.7 you don’t have to go through the C barrier anymore; golang compiles straight from Go to ASM. Yup, fucking dank!

So in order to cross-compile some pure go code on OSX for my RaspberryPi, i just run:

GOOS=freebsd GOARCH=arm GOARM=6 go build src/*.go

Yup, that’s it. scp that binary over and be happy with it. Why not do it on the ARM itself? Well, a) it probably takes a lot longer than on my Intel i7 octo-core, and b) golang on ARM is only available up to version 1.4, since there are some issues with newer versions (haven’t checked further), but cross-compiling with 1.8-HEAD works just fine.
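If you target several platforms, the same trick extends to a tiny loop. Here is a dry-run sketch that only prints the commands it would run (the binary name “app” is made up), so it works even without all toolchains present:

```shell
#!/bin/sh
# print one cross-compile command per GOOS/GOARCH pair (dry run)
print_build_cmds() {
    for target in "$@"; do
        GOOS=${target%/*}
        GOARCH=${target#*/}
        echo "GOOS=$GOOS GOARCH=$GOARCH go build -o app-$GOOS-$GOARCH src/*.go"
    done
}

print_build_cmds freebsd/arm linux/amd64 darwin/amd64
```

Pipe the output through sh once you trust it, or drop the echo and let it build for real.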

Performance

From my first few months of using it in production i can confirm that for my use-cases (mostly network code), golang is extremely fast, even tho Zero-Copy isn’t supported on FreeBSD yet.

Memory consumption for our applications is about 1/10th of the original JVM project’s, reducing memory requirements throughout our datacenter operations. About 6/10ths of the previously used JVM RAM got freed from our FreeBSD VMs, leaving a LOT of room for new clients/applications of ours.

Conclusion

Golang is going to be my new primary language, with Scala only in backup-mode for existing clients whose software (previously developed by me) needs support.

More go related posts to come in 2017!

Kali on the RaspberryPi with 3.5" LCD

So i acquired a cheap Chinese 3.5” LCD display with resistive touch from aliexpress. So far so good, but it took me nearly a month to get a working, up-to-date setup.

The Problem

The Chinese vendor i got it from refers to a site called waveshare.com, which is so badly connected it never loaded here. So i pulled up Google’s cache of the site and found a file name, LCD-show.tar.gz, which of course also didn’t load. So i set out to find the file, did so, and was baffled.

The manufacturer provides only binary modules for Linux kernel 3.18, no sources!

So i started checking what modules they loaded, and came across notro’s rpi firmware. Mildly out of date, but at least there is an issue there that has to do with my display and people not getting it to work, since 2014!!

After reflashing the RPi’s disk for the 40th time, having soft-bricked the installation with an out-of-date RPi firmware and outdated kernel modules that panic’d the thing on boot, i found the solution.

How do i get it to work?

Well, it’s fairly easy. After reading a bunch of code and googling for yet another file, i stumbled upon swkim01’s waveshare-dtoverlays GitHub repo, which makes the whole process as easy as copying the dtoverlay file into /boot/overlays/, adding one line to /boot/config.txt, rebooting and being done with it.

The Process

First clone the repo, then copy the overlay matching your display:

git clone https://github.com/swkim01/waveshare-dtoverlays.git
cp waveshare-dtoverlays/waveshare3(2b|5a)-overlay.dtb /boot/overlays/

Then adding the following to /boot/config.txt (depending on your display and needs):

3.2” LCD’s /boot/config.txt with 270° rotation

dtoverlay=waveshare32b:rotate=270

3.5” LCD’s /boot/config.txt with 90° rotation and having XY of touch swapped

dtoverlay=waveshare35a:rotate=90,swapxy=1

Reboot

After rebooting, my display lit up in black (if the driver is not loaded it stays white) but didn’t do much else. So i added the following lines to /usr/share/X11/xorg.conf.d/99-fbdev.conf (or create that file if you don’t already have it from failed attempts):

Section "Device"
  Identifier "myfb"
  Driver "fbdev"
  Option "fbdev" "/dev/fb1"
EndSection

Then running FRAMEBUFFER=/dev/fb1 startx made it launch into X for the first time. YAY


Calibration

After i had it running, i noticed that my mouse pointer didn’t appear where it should: if i touched the screen (even with the pen that came with it), the position of the event was off. So i figured out this approach to get it working for my 270° rotated setup.

You just have to put the driver's information into Xorg config. I put this part into /usr/share/X11/xorg.conf.d/99-calibration.conf:

Section "InputClass"
    Identifier "calibration"
    MatchProduct "ADS7846 Touchscreen"
    Option "Calibration" "3869 178 3903 270"
EndSection

If these values do not work for you, install xinput-calibrator (apt-get install worked) and run it while X is open. At the end it will yield a configuration for you to put into 99-calibration.conf.



Ideas for the future

I want to build a little RaspberryPi-powered WiFi attack station. Basically plug it into a power-bank or a wall-socket, wait for the GUI to appear and then either select a WiFi to attack, or have it auto-attack everything around it. Currently i’m writing a wrapper script for aircrack-ng’s CLI tools that wraps the needed steps; after that i’ll dig into GUI stuff, which i’ve never coded this way before. (Only VisualBasic back around 2000.)


Happy hacking yourself!

Merry Christmas and Happy 2017 (maybe not for you US citizens)

What would an Apple Car look like, judging from what we have today?

So just today we all got the “surprising” news that Apple is really interested in autonomous cars. We all knew it for months, but hey, let’s play along for a second. Furthermore, consider the groundbreaking innovations we’ve just witnessed with the new iPhone and MacBook. Yes, leaving out the function keys and replacing them with an ARM-based iWatch, that’s what’s really been missing from MacBooks. Never mind that it’s not easy to switch batteries, or that your RAM is soldered into the fuckers. Nope, you really really needed a fucking iWatch in your MBP. Not to forget the smart move of removing the iPhone’s audio jack, so you need even more fucking adapters and cables! Genius!

So let’s assume for a second, Apple was to build a car

#1 Charging

In order to drive an all electric autonomous vehicle, you need to charge it somehow:
So my Apple-dream-car would optimally not work with any other charging cable out there. I really would require it to have a new plug design. Maybe something that can wire Audio/Video into my Garage-Door, for no good reason yet. (they’ll think of something for iGarage 2.0 - but keep reading)

#2 Seatbelts

Well, either Apple is gonna strangle every country with some fancy law firm so seatbelts will be obsolete in the future, ooor, i think, you’re just going to have to buy seatbelts made for your body size.
Because Apple is likely bold enough to remove built-in seatbelts and have you choose between XS, S, M, L, XL and XXL seatbelts in the Apple Store.

#3 Babyseats

Of course Apple-Cars only fit Apple-certified Babyseats! Also you need special Apple tools (screwdrivers, etc) to install your Babyseat. Or you just drive by one of the Apple-Stores, make an appointment with a Genius, and he’ll do it for you!

#4 Wheels

It would not be an Apple car if they didn’t redesign the tires! A true Apple car doesn’t have round rubber tires! No, ideally it has tires (an essential part) made out of little refurbished MacBook Pros. No rubber, all aluminum! Brakes are applied through signals on an i2c bus!

#5 Garage

Also, since your Apple Car was so boldly redesigned that it has the shape of a triangle, you’ll need the special iGarage from Apple. It doesn’t come with any of the cables required to operate it, but once you’ve spent the extra 10k USD on the cables and motors to open the garage, you’ll be set to go. Be sure to only buy Apple-certified garage motors!

Deathtrap or Moneysuck?

Well, that depends. If you have enough money to get your car running for the first time, you’ll still have to spend a few thousand USD to make it street legal. Like, it would need lights, but you didn’t get those, because the iCar 1.0 was so boldly redesigned they even left off the roof! It rarely rains in California, you know?

On the other hand, since Apple’s QA seems to have died with Steve Jobs, i wouldn’t count on them having tested anything. You’ll likely step into your car one morning to find it dead, because some OTA software update bricked it.

Final thoughts

Yeah, better get a Tesla.

The FreeBSD World

FreeBSD is my favorite OS, not just recently but all the way back to 2003, when Linux didn’t cut it anymore for me. Granted, Linux has more device drivers available, but that also means more bugs, code duplication and, generally speaking, bad coding habits.

FreeBSD on the other hand (like the other BSDs) values quality and stability, and that isn’t just an empty hull. It has kqueue(2), which is far superior to epoll. It has ZFS built in, with Solaris folks who have worked on ZFS since its inception committing to it.

It handles Xen as DomU and Dom0, has its own hypervisor, bhyve, and natively offers failover capabilities like those of pacemaker or keepalived through carp(4).

It offers a straight-up Makefile to build the entire toolchain and OS, natively offers a nice RAM-disk solution named NanoBSD, and there is the more extensive third-party solution mfsBSD.

More ports are available than you could ever build or install; i already added 2 new ports this year (oscam and pixiewps), as well as updated an existing one (tvheadend).

It also has a really detailed Handbook, which will get you started in no time. If you come from Linux, you’ll find new appreciation for tools like man, since the content provided by the OS now really is worth reading.

Why we at anycast.io switched completely over

With Linux, it has become a gamble between kernel updates and security issues arising from pre-packaged software. On FreeBSD we can easily use tools like poudriere to compile third-party software with the features we want, packaged and signed, ready to be installed.
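For the curious, the poudriere workflow boils down to roughly the following. Jail name, release version and list file are examples of mine, so check the poudriere docs for your setup:

```
poudriere jail -c -j 110amd64 -v 11.0-RELEASE
poudriere ports -c
poudriere bulk -j 110amd64 -f /usr/local/etc/poudriere.d/pkglist
```

The bulk run builds everything from the list (plus dependencies) into a pkg repository you can point your hosts at.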

We now use mfsBSD for most of our physical systems, reducing the risk of persistent malware/rootkits across reboots. This also helps us avoid problems with logging in through sshd after an OS disk fails, since we do not need an OS disk anymore. :)

In case you wonder: sure, we persist some host-specific configuration on the machine’s ZFS raidz pool, which then gets loaded on boot.

This makes upgrade procedures easy, especially in an environment like ours, where clients bring their own hardware.

If i had only one or three kinds of servers in the racks, then this would be a lot easier, but i can’t make all our clients buy the same hardware for totally different needs; that wouldn’t be right.

So let’s take Rack B for example, where we have 20 different servers, meaning 20 different ways an OS upgrade could fail. If you had to work with rolling back or restoring from backups, this would be a tedious task. Since we are able to PXE-boot (almost) all of our physical hosts from mfsBSD, we just try to boot the new PXE image and see what happens. If it works, great! If it doesn’t, we just reboot into the old image.

Why can’t you do that with Linux?

You can, and we already did this with AlpineLinux, but it’s tedious to keep track of all the new additions in Linux. AlpineLinux has done a great job in the past, and we will continue to run some of our Xen Dom0s on it (since it integrates with GrSec perfectly), but we will certainly dip into all the benefits FreeBSD has to offer.

Linux distros seem to be “bleeding edge” compared to FreeBSD. You have all these fancy things like systemd and whatnot, but who wants systems to change rapidly when it’s not needed? Nobody, except IT hipsters i guess.

I’m happy with using the same rc.d system that has booted FreeBSD for 2 decades (with adjustments of course), and this also means your stuff is so much more likely to just work with the new version.

That’s what i call stability.

So i’ll be posting a lot more BSD-related things in the future, as i’m looking forward to new challenges with FreeBSD which just make me smirk towards Linux folks. ;)

Franz

The #Kali #NetHunter 2.0 on #Nexus4 (Mako) HowTo

Kali Linux is a penetration-testing distro based on Linux. Most people reading my blog may already know that, but what you might have missed, if you don’t own an Android device, is that there is something called Kali NetHunter, and it was already released in version 2.0.

Ever since i ported the first half-way working version of Android to my HTC BlueAngel back in the day, these things have been on my “have a closer look at” list. Now with a pentest distro on there, that’s a lot more fun.

NetHunter provides you with almost everything you can imagine on your Android-based device. Even tho password cracking makes much more sense on another machine, capturing handshakes and sending them off for remote cracking isn’t that hard to accomplish.

Now before you jump to the download-page and ruin your nicely installed Android phone, here are a few pointers to get you on the right way.

MultiROM

If you’re not using MultiROM to fiddle with your Android ROMs, you’re doing it wrong. (I’m not aware of alternatives; if there are any, please comment.) Anyway, it lets you have multiple Android ROMs on the same phone, without having to overwrite your nicely configured CyanogenMod or Paranoid Android.

With that installed, you just boot into Recovery (provided you have a rooted device), tap Advanced -> MultiROM -> Add ROM -> Next -> ZIP file -> and install whatever Android ROM you want.

After that, whenever your phone boots, it will present you with a list of ROMs to choose from, or boot your default within 5 seconds.

You can get MultiROM from the Google Play store.

But hold off on booting into Recovery until you’ve completed the next 2 steps.

Which ROM to choose?

Well, you need a base ROM for Kali, since the NetHunter image only contains the Kali launcher app, which will then download everything for you.

I went ahead with AOSP / Android Open Source Project and downloaded a ROM from nxrom.us, which already has 5.1.1 without Google Apps for my Nexus 4.

Just download the ZIP and copy it to your Phone’s SD-Card.

Where to get the NetHunter 2.0 Mako Image?

As you might have noticed, it’s listed in the “supported devices” section over at kali.org, but you won’t find a download link anywhere. Now you could either go ahead and compile everything from source as described in the Kali NetHunter GitHub repo, or you could just use the shortcut binkybear on Freenode gave me, which worked nicely:

git clone https://github.com/offensive-security/kali-nethunter.git -b newinstaller-fj
cd kali-nethunter/AnyKernel2
python build.py -f
python build.py -d mako -l

This works on OSX as well as on Linux. You’ll get your Kali Update ZIP File which you’ll also place on your device’s SD-card.

Installation

Now, when both (Android ROM + Kali) are copied to your SD card, you can proceed to boot into MultiROM recovery.

Once there, tap Advanced -> MultiROM -> Add ROM -> Next -> ZIP file -> select the Android Image downloaded from nxrom.us (Android5.1.1_NX..zip).

Once that is successful, tap Back in the lower right corner (not Reboot System) -> Back again -> List ROMs -> select the newly installed ROM -> Flash ZIP -> select update-nethunter-mako-…zip

Now, NetHunter uses the AROMA installer, which is very nice, as you’ll notice. Just tap through the quick GUI installer and select all of the apps; once done, the installation will drop you back into MultiROM.

Once finished, reboot into your new system.

Great, what now?

Open the NetHunter App and install the Kali chroot, which will give you all the tools in a self-contained chroot. This might take 30 minutes to complete.

From there it’s up to you what to do. ;)

Special thanks goes out to binkybear on Freenode

Deploying #Liftweb .war files on #Jetty in 2015

If you’re running Liftweb applications in production, you might be using either Tomcat or Jetty. If you’re using Jetty, you’ve probably stumbled across David Pollak’s jetty.tgz (i can’t find the link anymore).

While it is a nice collection of scripts and files (having ‘8G’ in a file called ramsize, ‘8080’ in a file called baseport, etc), keeping this (even small) overlay of files in sync with an updated jetty dist tarball can be tricky.

So what does my solution differently?

Instead of having 6-7 different files (baseport, ramsize, start_prod.sh, start_pilot.sh, stop_prod.sh, stop_pilot.sh, ..), i moved them all into one shell script which you can just place in your freshly extracted jetty-distribution tarball and go.

It does not copy your packaged .war file or anything; it’s basically just an rc-script for jetty.