println - My new Blogging Software

Yesterday I finally finished writing version 0.1 of my blogging software called println, and today version 0.2 already hit GitHub.

Also, I have just moved this blog from Nesta CMS to println. ;)

I am basically just pasting my README.md here:

println is blogging/publishing software written in Scala with the Lift Web Framework. With its overlay editor window, it tries to fulfill the need for an unobtrusive approach to writing, publishing and managing one's blog posts.

It is heavily inspired by Nesta CMS, a Ruby-based CMS, and also uses parts of its default layout (since I like it for its simplicity).

The official website is currently being built with println itself; please be patient while it comes together. There will also be a webcast soon.

Features

  • High Performance (thanks to Lift and Scala)
  • Easy to use (thanks to me)
  • Easy to write (Live-Preview of your Markdown or Plaintext)
  • Easy to customize (by default only layout or css changes are needed)
  • Easy to deploy (.war Archive)
  • Internal web-tracking with MongoDB (optional)
  • Lots of widgets by default (see next section)
  • Atom-Feed (/atom.xml)
  • Google Sitemap (/sitemap.xml)
  • AJAX-/Facebook-Style Tagging with Tag-cloud

Caveats

Currently it is text-only. I will implement media management as soon as possible. For now I suggest storing your images elsewhere (Flickr, Picasa), as they offer better upload options from mobile devices anyway.

Another caveat is that JavaScript is not rendered in the Live-Preview (but renders properly on the resulting published page). This means that if you enter JavaScript into the Live-Editor, it won't show in the Live-Preview, but will work perfectly normally on the website.

Layout and Widgets

The main layout is in src/main/webapp/template-hidden/default.html and has all the widgets that are currently implemented. Here is the overview:

  • BitPit: <span class="lift:Helpers.bitpit?id=7019"/>
  • Twitter: <span class="lift:Helpers.twitter?user=fbettag"/>
  • Google Analytics: <span class="lift:Helpers.analytics?ua="/>
  • Tag-Cloud: <span class="lift:Tags.cloud"></span>
  • Copyright Helper: &copy; <span class="lift:Helpers.years?since=2010"></span>

If you want to implement your own, feel free to look at src/main/scala/code/snippets/Helpers.scala for how to do so.

Made with love

It is made with the following pieces of software:

How to get started

In order to get, compile and run the project locally, you need:

After installing PostgreSQL, run the following in your shell:

$ createuser -Upostgres println
$ createdb -Upostgres --owner println println

If everything is up and running:

$ git clone git://github.com/fbettag/println.git
$ cd println
$ edit src/main/resources/props/default.props (or production.props)
$ ./sbt update
$ ./sbt ~jetty-run

Visit http://127.0.0.1:8080 with your browser and follow the on-screen instructions.

Without MongoDB

MongoDB is solely used for statistical analysis like browser, referer or target-URL tracking. This is not fully tested yet, but it yields interesting results and offers more flexibility than comparable commercial products.

If you want to try it without MongoDB, feel free to do so. Just make sure you unset/comment “mo.host” in your .props-files.
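For illustration, the relevant fragment of such a .props file could look like this (only the mo.host key name is from the README; the value is an example):

```
# props file sketch – comment out mo.host to run without MongoDB tracking
#mo.host=127.0.0.1
```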

Page title

Instead of writing stupid SQL queries to get the default page title, the page title is defined in the properties files (default.props, production.default.props). IMHO this saves performance and is practical: how often do you change your main site title anyway?
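For illustration, a hypothetical fragment (the actual property name isn't spelled out in the README):

```
# default.props – hypothetical property name for the site title
blog.title=My Blog
```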

Tracking atom.xml and sitemap.xml

Simply place the following in one or both of the files (not in any of the repeated sections of course):

<lift:Stats.track/>

Remarks

Some of the JavaScript and request-routing stuff is very hackish, but it also shows the capabilities Lift has to offer and how you can reuse (or abuse) them.

Todo

  • AutoScroll/Sticky Editor-Window
  • JavaScript-Evaluation in Live-Preview
  • Media Management -> Image-Upload, etc.
  • Twitter, Facebook and Google+ Auto-Publish

Footnote

Thanks to everybody in the Lift Community and on Liftweb Google Groups.

PowerDNS with Ruby Datamapper

Today I was asked on Twitter how I did my PowerDNS setup with DataMapper, so I thought, "why not make a post out of it?" Anyway, the code is pretty straightforward.

DNS Zone

In my setup, I have two different models for DNS names: one being DNS Zone and the other being the related DNS Domain (an RDBMS mapping to my upstream domain registries). Today we'll only handle DNS Zone, as DNS Domain is part of my DNS robot.

Here is the model for the DNS Zone, which is pretty straightforward:

DNS Record

Now this also is not much of a mystery, but I guess you'll like the generate_serial method, which auto-increments your zone's serial accordingly.
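The idea behind generate_serial is easy to sketch. Here is a hypothetical, DataMapper-free version of the logic (the method name comes from the post; the YYYYMMDDnn serial convention is an assumption on my part, being the common DNS practice):

```ruby
require 'date'

# Hypothetical sketch of the serial-bump logic a generate_serial method
# would perform. DNS zone serials conventionally use the YYYYMMDDnn
# format: the current date plus a two-digit revision counter that is
# incremented on same-day edits.
def generate_serial(current = nil)
  today = Date.today.strftime('%Y%m%d')
  if current.to_s.start_with?(today)
    current + 1                  # same day: bump the revision counter
  else
    (today + '00').to_i          # first edit of the day: reset counter to 00
  end
end
```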

pdns.conf

First, we need to define the DB connection:

launch=gpgsql
gpgsql-host=127.0.0.1
gpgsql-dbname=pdns
gpgsql-user=pdns
gpgsql-password=mypw

Since my schema is not really compatible with PowerDNS, I've created two views to make it compatible:
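As a sketch of what such views could look like (the table and column names below are assumptions, not my actual schema; PowerDNS's generic PostgreSQL backend expects tables named domains and records):

```sql
-- Hypothetical sketch: maps an assumed custom schema (dns_zones,
-- dns_records) onto the layout the PowerDNS gpgsql backend expects.
CREATE VIEW domains AS
  SELECT z.id,
         z.name,
         NULL::text    AS master,
         NULL::integer AS last_check,
         'NATIVE'      AS type,
         NULL::integer AS notified_serial,
         ''            AS account
    FROM dns_zones z;

CREATE VIEW records AS
  SELECT r.id,
         r.dns_zone_id AS domain_id,
         r.name,
         r.type,
         r.content,
         r.ttl,
         r.priority    AS prio,
         NULL::integer AS change_date
    FROM dns_records r;
```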

Final steps

Adjust your pdns.conf to your needs. This is important (at least for me), since I don't want to allow unauthorized AXFR requests, which would give an attacker even more targets to focus on.

Anyway, I hope this helps some of you get your PowerDNS populated; the GUI part is left to your imagination. ;)

Best regards

Bandwidth-tests with my Juniper J2320 Router

The Juniper J2320 is a modular router for enterprises running desktops, servers, VoIP, CRM/ERP/SCM applications. It offers three PIM slots for additional LAN/WAN connectivity and has all the basic licenses for BGP, OSPF and all that fancy stuff included. This is especially nice since an advanced routing license for an EX-series costs more than the whole J-Series.

Routing Configuration

On a J2320, you have to define security zones as well as a policy. If an interface is not assigned to a zone, its packets will be dropped, since they fall into the Null zone. To prevent this, you can configure your J-Series like this:

root@j2320# set security zones security-zone internal interfaces ge-1/0/0
root@j2320# set security zones security-zone external interfaces ge-1/0/1

You also have to configure a default policy (you can certainly put in your own rules; this was just for quick testing):

root@j2320# set security policies default-policy permit-all

The show configuration command should show something like this:

root@j2320# show security
zones {
    security-zone internal {
	interfaces {
	    ge-1/0/0;
	}
    }
    security-zone external {
	interfaces {
	    ge-1/0/1;
	}
    }
}
policies {
    default-policy {
	permit-all;
    }
}

Routing Performance

For routing performance, we use iperf again, since it already did a pretty good job on our last test.

# iperf -s

On the other machine I run:

# iperf -c 10.0.2.5
------------------------------------------------------------
Client connecting to 10.0.2.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 34583 connected with 10.0.2.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   511 MBytes   429 Mbits/sec

I also tried this using 64-byte packets again:

# iperf -c 10.0.2.5 -l 64
------------------------------------------------------------
Client connecting to 10.0.2.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 54787 connected with 10.0.2.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   497 MBytes   417 Mbits/sec

Around 420 Mbit/s no matter how small the packets are; that's neither good nor bad, it's average. Not perfect (since it is a 1GE interface), but then again, it was not meant to be a multi-Gbit router.

But I thought there had to be more in it, so I racked my brain and came up with the following.

Routing Performance (other PIC)

Once I noticed the slow performance, I had the idea of plugging one of the uplinks into another PIC of the J2320, thereby maximizing its backplane capabilities. The results with 64-byte packets:

# iperf -c 10.0.2.5 -l 64
------------------------------------------------------------
Client connecting to 10.0.2.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 41937 connected with 10.0.2.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   639 MBytes   536 Mbits/sec

Once again disappointing with only 536 Mbit/s. But when I ran it with -l 128, I was amazed:

# iperf -c 10.0.2.5 -l 128
------------------------------------------------------------
Client connecting to 10.0.2.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 41525 connected with 10.0.2.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.04 GBytes   895 Mbits/sec

895 Mbit/s! That's a number I can show around!

Conclusion

The J-Series router is a nifty little piece of hardware. If you want firewalling, it is the right choice. If you want to use it as a backbone router, you might get into trouble during DDoS season (school vacation).

I am currently thinking about selling it in mint condition (since it has only been used for a few days in the lab). If you're interested, follow my Twitter feed, and as soon as I put up the auction, I will post a link there.

Let's see; maybe if my hardware sales go up a bit, I might even be able to test some even fancier equipment. I'd so love to see bigger Juniper routers and their routing protocols under a lot of stress. I hope that time comes soon so I can unleash mausezahn on them. :)

Best regards, and thanks for reading

Bandwidth-tests with my new Juniper EX2200-48T

The EX2200 line of Juniper Ethernet switches is ideal for access-layer deployments in branch offices and campus networks, and delivers a level of functionality and performance normally associated with higher-cost Ethernet switches. I'd also say they are capable of handling datacenter workloads. Anyway, I am planning on using them as rack switches over at my rack-housing business.

Switching Performance

On my MacBook Pro I was running:

netcat -l -p 12345 > /dev/null

On the other machine I used dd piped into netcat (two independent runs):

$ dd if=/dev/zero count=1000 bs=1M | nc 10.0.0.235 12345
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.99109 s, 117 MB/s

2nd run:

$ dd if=/dev/zero count=1000 bs=1M | nc 10.0.0.235 12345
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.98447 s, 117 MB/s

Clearly you can see that both runs ended at 117 MB/s, which is full line speed in my book. :)

Routing Configuration

Normally, an interface on an EX-Series switch is configured for ethernet-switching:

root@ex2200# show interfaces ge-0/0/0
unit 0 {
	family ethernet-switching;
}

To set a family other than ethernet-switching, you need to remove it first!

root@ex2200# delete interfaces ge-0/0/1 unit 0 family ethernet-switching
root@ex2200# delete interfaces ge-0/0/2 unit 0 family ethernet-switching

After that, we set our IPs on the desired interfaces:

root@ex2200# set interfaces ge-0/0/1 unit 0 family inet address 10.0.0.1/24
root@ex2200# set interfaces ge-0/0/2 unit 0 family inet address 10.0.1.1/24

The show configuration command should show something like this:

root@ex2200# show interfaces ge-0/0/1
unit 0 {
	family inet {
		address 10.0.0.1/24;
	}
}

root@ex2200# show interfaces ge-0/0/2
unit 0 {
	family inet {
		address 10.0.1.1/24;
	}
}

Routing Performance

For routing performance, I want to be a bit more exact; iperf will do the trick. On the server (in my case the MacBook Pro):

# iperf -s

On the other machine I run:

# iperf -c 10.0.1.5
------------------------------------------------------------
Client connecting to 10.0.1.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 41361 connected with 10.0.1.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec

I also tried this using 64-byte packets:

# iperf -c 10.0.1.5 -l 64
------------------------------------------------------------
Client connecting to 10.0.1.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 58634 connected with 10.0.1.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   619 MBytes   520 Mbits/sec

And also using 128-byte packets:

# iperf -c 10.0.1.5 -l 128
------------------------------------------------------------
Client connecting to 10.0.1.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.5 port 41361 connected with 10.0.1.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec

933 Mbit/s with normal-sized packets is perfect; the only thing it seems to dislike is small 64-byte packets, where I only got 520 Mbit/s.

Conclusion

The EX-Series switches are a great replacement for any other 1GE switch; the only thing I am sad about is the Advanced Routing License you have to obtain from Juniper in order to make this device work with protocols like OSPF or BGP.

Now that I have the basic Juniper CLI figured out, I will try some fancy stunts with the J2320 in the next few days.

Stay tuned!

How bad MySQL really is

I've got a customer who runs an image gallery; sadly, they often have performance problems. Our first big migration was from Apache with mod_php to nginx with FastCGI, with MySQL on a separate host. That worked for almost a year, but now trouble is here again…

You might wonder, "an image gallery? What can go wrong with that?" Well, a lot, actually. Apart from being the uber-buggy Coppermine Gallery, which regularly has some really bad security holes, the gallery software features a good amount of badly written SQL statements that bring MySQL down really fast. Due to the lack of developer skills, it also has no DB abstraction, which makes it hard to use with a real database like PostgreSQL.

So why is MySQL so bad?

First let us look at the specs of the DB-Host:

# grep "model name" /proc/cpuinfo
model name	: Intel(R) Xeon(TM) CPU 2.40GHz
model name	: Intel(R) Xeon(TM) CPU 2.40GHz
model name	: Intel(R) Xeon(TM) CPU 2.40GHz
model name	: Intel(R) Xeon(TM) CPU 2.40GHz

# grep MemTotal /proc/meminfo
MemTotal:        4152104 kB

# dmesg | grep -i "direct-access"
[    7.069453] scsi 0:0:0:0: Direct-Access     IBM-ESXS MAP3367NC     FN C101 PQ: 0 ANSI: 3
[    7.099705] scsi 0:0:1:0: Direct-Access     IBM-ESXS DTN036C3UCDY10FN S27M PQ: 0 ANSI: 3

Looks quite OK for a single DB host that is supposed to serve one lousy database for an image gallery.

Resource usage

The database is running in production and has accumulated ~500 MB. That is not much at all:

# du -hs /var/lib/mysql/
496M	/var/lib/mysql/

Top shows this:

top - 20:33:30 up 271 days, 21:12,  1 user,  load average: 0.00, 0.02, 0.05
Tasks:  85 total,   1 running,  84 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.6%us,  0.1%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4152104k total,  3379848k used,   772256k free,   333684k buffers
Swap:   498004k total,        0k used,   498004k free,   747136k cached

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
9646 mysql     20   0 2131m 2.1g 4308 S   15 52.1  22343:23 mysqld

So HOW the fscking hell can a database eat 52.1% of 4 GB RAM to hold a 500 MB database and still suck performance-wise?

I am by far no MySQL fan, and this shows me once again that MySQL is not a real database. It's a toy for people who want to try out PHP without digging too much into the whole RDBMS thing, and that is not working out. It makes me very proud to see real projects come out of the ground that are neither written in PHP nor rely on MySQL as their RDBMS; the lazy PHP days are over, at least for me.

And when I look at my past four PHP-free years and the whackadoos that usually use that language, I am so proud I jumped ship and started doing real work.

Solution

Instead of trying to optimize the shoddy database any further, I've come to an arrangement with my customer: I will write him image gallery software that actually works and performs.

It is almost finished and is built using the following pieces of software:

When it's done, I'll write a detailed post about it, for sure. ;)

Next up: Juniper benchmarks. Stay tuned!

Hardening ArchLinux against local exploit compilation

A few weeks back, I had the pleasure of pentesting a CentOS server (or was it Fedora?). Anyway, some creepy RPM-based distro. What really annoyed me was that I had real trouble compiling certain exploits on this machine, since I was unable to use gcc, autoconf, make and whatnot. I mean, OK, I cross-compiled static binaries, but I don't really want to alert someone's IDS that binary shellcode was just transmitted over the wire. That is a no-go. Script kiddies won't mind, but then again, will script kiddies be able to compile static exploits?

Anyway, it annoyed the shit out of me, so I had two options:

  • add another group compiler (gid=15 like $rpm-based-crap)
  • use wheel

I chose wheel, because all my sudoer users have to be in wheel anyway; to compile anything you'd need an admin account, and guess what, then you've got bigger problems than gcc.

ArchLinux

# Restrict every toolchain file to group "wheel" and remove all access for others.
for i in $( pacman -Q -l autoconf automake fakeroot bison flex m4 make patch pkg-config libtool binutils gcc gcc-libs | awk '{print $2}' ); do
	if [ ! -d "$i" ]; then
		echo "$i"
		chgrp wheel "$i"
		chmod o= "$i"
	fi
done

ArchLinux (multilib)

# Same as above, but for the multilib toolchain packages.
for i in $( pacman -Q -l autoconf automake fakeroot bison flex m4 make patch pkg-config libtool-multilib binutils-multilib gcc-multilib gcc-libs-multilib | awk '{print $2}' ); do
	if [ ! -d "$i" ]; then
		echo "$i"
		chgrp wheel "$i"
		chmod o= "$i"
	fi
done

The packages are a bit different depending on what you’ve got.

I am NOT responsible if you break your system or if you get erectile dysfunction.

Enjoy.

High Performance URL-Shortening with Redis-backed nginx

I've spent my vacation wisely, writing my new customer interface. While doing that, I've played a lot with redis, which is one of the most remarkable pieces of software I've ever seen. Although it's not suited for "important"/"non-temporary" data (since it's only a key-value store and has limited query capabilities), it's perfectly made for caching.

One example is my PostgreSQL IP traffic accounting. It works nicely; I can get results for any subnet in 0.055 seconds. Since that data only changes once every 15 minutes, I cached it and cut the query time down to less than 0.017 s. The sample query I use covers my whole /20 prefix.
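To illustrate the caching pattern (this is not my actual code; redis SETEX-style expiry is emulated here with a plain in-process hash):

```ruby
# Sketch of 15-minute result caching: an expensive query runs only on a
# cache miss or after the entry's time-to-live has elapsed, the same way
# a redis SETEX key would expire.
class TtlCache
  def initialize
    @store = {}
  end

  # Fetch key from the cache; on a miss (or an expired entry) run the
  # block – e.g. the slow accounting query – and cache its result.
  def fetch(key, ttl_seconds)
    entry = @store[key]
    return entry[:value] if entry && Time.now < entry[:expires_at]
    value = yield
    @store[key] = { value: value, expires_at: Time.now + ttl_seconds }
    value
  end
end
```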

How about some config

To set up the URL shortener, we need the following components:

If you don't know what git is or how to use it, you shouldn't try this; you've got bigger things to worry about.

Add those to your nginx’s ./configure and start compiling:

While that compiles, we configure and start up redis. These are the values you should take a look at:

Please note that appendonly no disables persistence entirely; your data will be gone after a restart.
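For illustration, a minimal redis.conf fragment with the persistence setting mentioned above (values are examples, not a recommendation):

```
# redis.conf sketch – illustrative values only
appendonly yes          # keep an append-only log so links survive restarts
appendfsync everysec    # fsync once per second: durability/speed trade-off
```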

The final part is to get nginx to redirect to the redis response. That’s quite easy:

My regexp adds the special websauce that strips the redis return-value header, which looks like "$34\r\nyourvalue".
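As a sketch of how such a lookup could look (the module, location and key names here are assumptions, not the actual config; with the ngx_http_redis module the request path becomes the redis key, while the post's regexp approach suggests stripping the raw protocol header itself):

```nginx
# Illustrative sketch only – names and layout are assumptions.
location ~ ^/s/(?<short>[0-9a-zA-Z]+)$ {
    set $redis_key "short:$short";   # hypothetical key naming scheme
    redis_pass     127.0.0.1:6379;   # value stored under the key is returned
    default_type   text/plain;
    error_page     404 = /;          # unknown code: fall back to the start page
}
```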

Now write a fancy plugin or whatever to populate your links.

Best regards!

Blog moved to Nesta CMS

Finally! It took me almost 8 hours to completely recode this blog's HTML output to Markdown, but now the whole site is finally running on Nesta CMS.

Mainly, the goal was to speed up the page and have more control over it, which is definitely the case when I do things myself. Anyway, it works like a charm, and now we also have a working Atom feed.

Nesta is a small CMS/blog engine written in Ruby. It is slim (it uses Sinatra and static files) but still gives me a lot of options (Haml, Textile, Markdown).

You should definitely check Nesta out; I have been running the company website of Bettag Systems UG (haftungsbeschränkt) on it for quite a few years now, stable and without any hassle.

Cheers

Homebrew for OSX

Homebrew is the missing package management software for OS X. It is also the new kid on the block, being an alternative to MacPorts, which is IMHO good, but from the past decade!

Homebrew consists of a git repository holding "recipes" for how to install certain software, basically like NetBSD's pkgsrc, Gentoo's Portage or Arch Linux's PKGBUILDs. Luckily, these recipes are super-easy Ruby scripts which can be contributed to Homebrew through GitHub pull requests. I myself recently added two packages, and another one way back (though not through a pull request, d'oh).

For academic (cough) reasons, I updated Metasploit from 3.4 to 3.7.1 and added THC amap. If you look at the Ruby script, it's really super easy. Even monkeys could do it.

https://github.com/fbettag/homebrew/blob/10808a4d1a4793e7fb961c954f367f629eb54ad5/Library/Formula/amap.rb

So that's it from me for tonight. I'm back from my vacation, so expect some more content soon. Also, I am planning on moving this blog away from Posterous once and for all, as its performance seems really bad.