Java Date performance subtleties

A recent profiling session revealed that some of our processing threads were blocking on java.util.Date construction. This is troubling, because it’s something we do many thousands of times per second, and blocked threads are pretty bad!

A bit of digging led me to TimeZone.getDefault(). This, for some insanely fucked up reason, makes a synchronized call to TimeZone.getDefaultInAppContext(). The call is synchronized because it attempts to load the default time zone from the sun.awt.AppContext. What. The. Fuck. I don’t know what people were smoking when they wrote this, but I hope they enjoyed it …

Unfortunately, Date doesn’t have a constructor which takes a TimeZone argument, so its multi-argument constructors always fall back to getDefault() instead.

I decided to run some microbenchmarks. I benchmarked four different ways of creating Dates:

// date-short:
    new Date();

// date-long:
    new Date(year, month, date, hrs, min, sec);

// calendar:
    Calendar cal = Calendar.getInstance(tz); // tz is a TimeZone instance
    cal.set(year, month, date, hourOfDay, minute, second);
    cal.getTime();

// cached-cleared-calendar: same as calendar, but getInstance() is hoisted
// out of the loop:
    Calendar cached = Calendar.getInstance(tz);
    // and inside the loop, the instance is cleared before each use:
    cached.clear();
    cached.set(year, month, date, hourOfDay, minute, second);
    cached.getTime();

I tested single-threaded performance, creating 1M Dates with each method in a single thread, and multi-threaded performance with 4 threads, each creating 250k Dates. In other words: both scenarios ended up creating the same number of Dates.

Graph: Date creation performance for each method, single- and multi-threaded, indicating dramatic differences. Lower is better.

With the exception of date-long, all methods speed up by a factor of 2 when multi-threaded (the machine only has 2 physical cores). The date-long method actually slows down when multi-threaded, because of lock contention in the synchronized TimeZone acquisition.

The JavaDoc for Date suggests replacing the date-long call with a Calendar call. Performance-wise, this is not a very good suggestion: its single-threaded performance is twice as bad as date-long’s unless you reuse the same Calendar instance, and even multi-threaded it’s outperformed by date-long. This is simply not acceptable.

Fortunately, the cached-cleared-calendar option performs very well. You could easily store a ThreadLocal reference to an instance of a Calendar and clear it whenever you need to use it.
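
Here’s a minimal sketch of that ThreadLocal approach (the field and method names are mine, purely illustrative):

import java.util.Calendar;
import java.util.Date;

// One Calendar per thread: the synchronized TimeZone lookup is paid once
// per thread, and clear() wipes any state left over from the previous use.
private static final ThreadLocal<Calendar> LOCAL_CAL = new ThreadLocal<Calendar>() {
    @Override
    protected Calendar initialValue() {
        return Calendar.getInstance();
    }
};

static Date makeDate(int year, int month, int day, int hour, int minute, int second) {
    Calendar cal = LOCAL_CAL.get();
    cal.clear();
    cal.set(year, month, day, hour, minute, second);
    return cal.getTime();
}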

More important than the raw duration of the Date creation is the synchronization overhead. Every time a thread has to wait to enter a synchronized block, it could end up being rescheduled or swapped out, which reduces the predictability of performance. Keeping synchronization to a minimum (or zero, in this case) increases the predictability and liveness of the application in general.

Before anyone mentions it: yes, I’m aware that the long Date constructors are deprecated. Unfortunately, they are what Joda uses when converting to Java Dates. I hope that the (kind?) folks at Oracle will reconsider their shoddy implementation.

I’d be much happier if java.util.Date would stop sucking quite as much. And if SimpleDateFormat were made thread-safe. And … the list goes on.
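
Until SimpleDateFormat gets fixed, the usual workaround is the same ThreadLocal trick (a sketch; pick your own date pattern):

import java.text.SimpleDateFormat;

// SimpleDateFormat isn't thread-safe, so keep one instance per thread.
private static final ThreadLocal<SimpleDateFormat> FORMAT = new ThreadLocal<SimpleDateFormat>() {
    @Override
    protected SimpleDateFormat initialValue() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    }
};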


The I in IM

Instant messaging sucks. The “instant” part in particular. Being interrupted while trying to churn out code sucks and reduces my productivity. When I’ve got my headphones on, it’s a sign not to interrupt me unless the building is on fire. And when you’ve got your favourite IM client’s status set to BUSY/FUCK OFF/DND, people seem to ignore it and bombard you with piles of senseless drivel anyway. So I’ve stopped using any kind of instant messaging. The fact that all of these IM clients carry megabytes upon megabytes of useless crap and generally misbehave on your network is further ammunition against them. Skype is particularly bad. It even binds to port 80 for some entirely fucked up reason.

IRC, on the other hand, is amazing, and has been for over 20 years. It Just Works™. There are many clients for every single platform out there. It’s nicely standardized, so everyone can talk to everyone. It doesn’t make noise. It doesn’t get in your face. You can have group conversations, individual conversations, and encrypted conversations, all without relying on a 100 MB untrusted binary from EvilCorp™.

Running irssi in a screen (or tmux, if you’re so inclined) on a VPS somewhere ensures that you’re always online and can look back and read conversations that you may have missed. And all without wanting to throw anyone off a cliff. Yay. Thank you, IRC.

Now if only more people would start using IRC again, then the world would be perfect.

And while I’m ranting .. what’s wrong with e-mail? Facebook messages are probably about as annoying as IM clients. One of these days I’m going to start slapping people with trouts for not using e-mail.


Gnuplot data analysis: a real world example

Creating graphs in LibreOffice is a nightmare. They’re ugly, nearly impossible to customize and creating pivot tables with data is bloody tedious work. In this post, I’ll show you how I took the output of a couple of performance test scripts and turned it into reasonably pretty graphs with a few standard command line tools (gnuplot, awk, a bit of (ba)sh and a Makefile).

The Data

I ran a series of query performance tests against data sets of different sizes. The sets contain 10k, 100k, 1M, 10M, 100M and 500M documents. One of the basic constraints is that it has to be easy to add or remove sets: I don’t want to faff about with deleting columns or updating pivot tables. If I add a set to my test data, I want it to show up in my graphs automagically.

The output of the test script is a simple tab-separated file, and looks like this:

#Set	Iteration	QueryID	Duration
500M	1	101	10.497499465942383
500M	1	102	3.9973576068878174
500M	1	103	9.4201889038085938
500M	1	104	2.8091645240783691
500M	1	105	2.944718599319458
500M	1	106	5.1576917171478271
500M	1	107	5.7224125862121582
500M	1	108	5.7259769439697266
500M	1	109	4.7974696159362793

Each row contains the query duration (in seconds) for a single execution of a single query.

Processing the data

I don’t just want to graph random numbers. Instead, for each query in each set, I want the shortest execution time (MIN), the longest (MAX) and the average across iterations (AVG). So we’ll create a little awk script to output data in this format. In order to make life easier for gnuplot later on, we’ll create a file per dataset.

% head -n 3 output/500M.dat

#SET	QUERY	MIN	MAX	AVG	ITERATIONS
500M	200	0.071	2.699	0.952	3
500M	110	0.082	5.279	1.819	3

Here’s the source of the awk script, transform.awk. The code is quite verbose, to make it a bit easier to understand.

{
        if($0 ~ /^[^#]/) {
                # $1 = set, $3 = query id, $4 = duration in seconds
                key = $1"_"$3
                sets[$1] = 1
                queries[$3] = 1
                totals[key] += $4
                iterations[key] += 1
 
                if(1 == iterations[key]) {
                        minima[key] = $4
                        maxima[key] = $4
                } else {
                        minima[key] = $4 < minima[key] ? $4 : minima[key]
                        maxima[key] = $4 > maxima[key] ? $4 : maxima[key]
                }
        }
}
 
END {
        # Write one tab-separated file per set: output/<set>.dat
        for(set in sets) {
                outfile = "output/"set".dat"
                print "#SET\tQUERY\tMIN\tMAX\tAVG\tITERATIONS" > outfile
                for(query in queries) {
                        key = set"_"query
                        iterationCount = iterations[key]
                        # Guard against queries that never ran against this set
                        if(iterationCount == 0)
                                continue
                        average = totals[key] / iterationCount
                        printf("%s\t%d\t%.3f\t%.3f\t%.3f\t%d\n", set, query, minima[key], maxima[key], average, iterationCount) >> outfile
                }
        }
}

This code will read our input data, calculate MIN, MAX, AVG, number of iterations for each query and dump the contents in a tab-separated dat file with the same name as the set. Again, this is done to make life easier for gnuplot later on.
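
Running the transformation is then a one-liner (assuming the raw test output lives in queries.dat, as in the Makefile further down):

% awk -f transform.awk queries.dat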

I want to see the effect of dataset size on query performance, so I want to plot averages for each set. Gnuplot makes this nice and easy: all I have to do is name my sets and tell it where to find the data. But ah … I don’t want to tell gnuplot what my sets are, because they should be determined dynamically from the available data. Enter a wee shell script that outputs gnuplot commands.

#!/bin/bash
 
# Output plot commands for all data sets in the output dir
# Usage: ./plotgenerator.sh column-number
# Example for the AVG column: ./plotgenerator.sh 5
 
prefix=""
 
echo -n "plot "
for s in `ls output | sed 's/\.dat//'` ;
do
        echo -n "$prefix \"output/$s.dat\" using 2:$1 title \"$s\""
 
        if [[ "$prefix" == "" ]] ; then
                prefix=", "
        fi
done

This script will generate a gnuplot “plot” command. Each datafile gets its own title (this is why we named our data files after their dataset name) and its own colour in the graph. We want to plot two columns: the QueryID, and the AVG duration. In order to make it easier to plot the MIN or MAX columns, I’m parameterizing the second column: the $1 value is the number of the AVG, MIN or MAX column.
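
For example, with just the 10k and 500M sets present, the generated command would look something like this:

plot "output/10k.dat" using 2:5 title "10k", "output/500M.dat" using 2:5 title "500M"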

Plotting

Gnuplot will call the plotgenerator.sh script at runtime. All that’s left to do is write a few lines of gnuplot!

Here’s the source of average.gnp

#!/usr/bin/gnuplot
reset
set terminal png enhanced size 1280,768
 
set xlabel "Query"
set ylabel "Duration (seconds)"
set xrange [100:]
 
set title "Average query duration"
set key outside
set grid
 
set style data points
 
eval(system("./plotgenerator.sh 5"))

The result

% ./average.gnp > average.png

Graph: average query duration per query, one series per data set.

Wrapping it up with a Makefile

I don’t like having to remember which steps to execute in which order, and instead of faffing about with yet another shell script, I’ll throw in another *nix favourite: a Makefile.

It looks like this:

.PHONY: average
average:
        rm -rf output
        mkdir output
        awk -f transform.awk queries.dat
        ./average.gnp > average.png

Now all you have to do is run make whenever you’ve updated your data file, and you’ll end up with a nice’n purdy new graph. Yay!

Having a bit of command line proficiency goes a long way. It’s so much easier and faster to analyse, transform and plot data this way than it is using graphical “tools”. Not to mention that you can easily integrate this with your build system … that way, each new build can ship with up-to-date performance graphs. Just sayin’!

Note: I’m aware that a lot of this scripting could be eliminated in gnuplot 4.6, but it doesn’t ship with Fedora yet, and I couldn’t be arsed building it.


On Charsets, Timezones, and Hashes

Character sets, time zones and password hashes are pretty much the bane of my life. Whenever something breaks in a particularly spectacular fashion, you can be sure that one of those three is, in some way, responsible. Or DNS, of course. Apparently software developers Just Don’t Get It™. Granted, these are pretty complex topics. I’m not expecting anyone to care about the difference between ISO-8859-15 and ISO-8859-1, know about UTC’s subtleties or be able to implement SHA-1 using a ball of twine.

What I do expect is for sensible folk to follow these very simple guidelines. They will make your (and everyone else’s) life substantially easier.

Use UTF-8..

Always. No exceptions. Configure your text editors to default to UTF-8. Make sure everyone on your team does the same. And while you’re at it, configure the editor to use UNIX-style line-endings (newline, without useless carriage returns).

..or not

Make sure you document the cases where you can’t use UTF-8. Write down and remember which encoding you are using, and why. Remember that iconv is your friend.

Store dates with time zone information

Always. A date/time is entirely meaningless unless you know which time zone it’s in. Store the time zone. If you’re using some kind of retarded age-old RDBMS which doesn’t support date/time fields with TZ data, then you can either store your dates as a string, or store the TZ in an extra column. I repeat: a date is meaningless without a time zone.

While I’m on the subject: store dates in a format described by ISO 8601, ending with a Z to designate UTC (Zulu). No fancy pansy nonsense with the first 3 letters of the English name of the month. All you need is ISO 8601.

Bonus tip: always store dates in UTC. Make the conversion to the user time zone only when presenting times to a user.
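
A minimal sketch in Java (pre-Java 8, like the rest of this post), with the UTC zone set explicitly:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Format the current time as ISO 8601 in UTC, e.g. 2012-12-21T13:37:00Z
SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
iso.setTimeZone(TimeZone.getTimeZone("UTC"));
String stamp = iso.format(new Date());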

Don’t rely on platform defaults

You want your code to be cross-platform, right? So don’t rely on platform defaults. Be explicit about which time zone/encoding/language/.. you’re using or expecting.
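
A few Java examples of what being explicit looks like (the specific charset, locale and zone here are illustrations, not prescriptions):

import java.nio.charset.Charset;
import java.util.Locale;
import java.util.TimeZone;

// Explicit beats implicit: these behave the same on every platform.
byte[] bytes = "ünïcödé".getBytes(Charset.forName("UTF-8")); // not getBytes()
String text = new String(bytes, Charset.forName("UTF-8"));   // not new String(bytes)
Locale locale = Locale.US;                                   // not Locale.getDefault()
TimeZone zone = TimeZone.getTimeZone("UTC");                 // not TimeZone.getDefault()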

Use bcrypt

Don’t try to roll your own password hashing mechanism. It’ll suck and it’ll be broken beyond repair. Instead, use bcrypt or PBKDF2. They’re designed to be slow, which will make brute-force attacks less likely to be successful. Implementations are available for most sensible programming environments.
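
A sketch using the jBCrypt library (org.mindrot.jbcrypt); the work factor and the password variables are placeholders:

import org.mindrot.jbcrypt.BCrypt;

// At account creation: gensalt(12) sets the work factor (2^12 rounds);
// the generated salt is embedded in the resulting hash string.
String hash = BCrypt.hashpw(plaintextPassword, BCrypt.gensalt(12));

// At login: checkpw re-hashes the candidate using the embedded salt.
boolean ok = BCrypt.checkpw(candidatePassword, hash);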

If you have some kind of roll-your-own fetish, then at least use an HMAC.
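
If you must, at least lean on the JDK’s built-in implementation (a sketch; secretKey and messageBytes are placeholder byte arrays, and key handling is up to you):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Compute an HMAC-SHA256 tag over messageBytes with secretKey.
// (getInstance and init throw checked exceptions; handle or declare them.)
Mac mac = Mac.getInstance("HmacSHA256");
mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
byte[] tag = mac.doFinal(messageBytes);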

Problem be gone

Keeping these simple guidelines in mind will prevent entire ranges of bugs from being introduced into your code base. Total cost of implementation: zilch. Benefit: fewer headdesk incidents.


On bug reports

There’s an old joke about a Manager, an Engineer and a Software Developer: they’re in a car and the brakes fail as they go down a mountain road. They miraculously come to a standstill, but now they’re stuck, and Somebody Should Do Something™. The Manager suggests they make a Plan and define Goals & Measurable Objectives in order to solve the Critical Problem. The Engineer has his tools with him, so he suggests he take a look at the problem and fix it. The Software Developer, of course, is not convinced, and suggests they push the car uphill to see if the problem will manifest itself again …

You can probably find a bunch of different versions of this on the web, but the punch line is always the same: the developer wants to be able to reproduce the bug. Not just to verify its existence (although it wouldn’t be the first time a user reported a critical data loss bug after having pressed the delete button..), but also to have a place to start the bug hunt. Software systems are complex. We have enough layers of abstraction built on top of a bunch of transistors to make Shrek jealous. Something as seemingly simple as displaying a bit of text on a screen is so complex that it can no longer be fully understood by a single person.

Being able to reproduce a bug is the only way to resolve ambiguity. When it isn’t possible to describe all the steps that led to the bug, then a thorough description of the problem is the next best thing. Think screenshots, explanations of the expected result, and answers to all of the usual “Which”-questions (Version, OS, Environment, Lunar Phase, ..).

Debugging is hard (and fun). Vague bug reports like “the application is broken” rather make me want to tie a team of horses to your limbs and decapitate your bloody remains. Ahem. I mean: educate you on the virtues of well-written bug reports.

Now .. let me see if I can fix this “broken” application .. sigh.


Java 7 performance

I decided to compare Java 6 & 7 performance for $employer’s $application. Java 7 performs better, as expected. What I did not expect was that the difference would be so big: around 10% on average. That’s not bad for something as simple as a version bump.

Graph: Java 6 vs Java 7 performance comparison.

Ideally I’d like to investigate where this difference comes from.

$application uses Apache Solr rather extensively. In fact, most of the time is spent in Solr: with indexing it’s probably about 50% of the time, and with querying it’s probably closer to 90%. All tests are run in a controlled environment, so I have a fair amount of confidence in these results.

The indexing test inserts 3 million documents into Solr. Creating these documents takes up the bulk of the time: it involves a lot of filesystem access (something the Java version has very little influence over) and heavily multi-threaded, CPU-intensive processing.

If you’re not using Java 7, you really should consider upgrading. If you’re stuck with people who live in the past, maybe you can convince them with a bunch of pretty performance graphs of your own.


What bugs me on the web

2013 is nearly upon us, and the web has come a very long way in the 17+ years I’ve been a netizen. And yet, even though we’ve made so many advances, it sometimes feels like we’ve been stagnant, or worse, regressed in some cases.

Each and every web developer out there should have a long, hard think about how the web has (d)evolved in their lifetime and which way we want to head next. There’s an awful lot happening at the moment: web 2.0, HTML 5, Flash’s death-throes, super-mega-ultra tracking cookies, EU cookie regulation nonsense, microdata, cloud fun, … I could go on all day. Needless to say: it’s a mixed bunch.

In any event, here’s a brief list of 3 things that bug me on the web.

Usability

Usability has long been the web’s sore thumb, and in spite of any number of government-sponsored usability certification programmes over the years, people still don’t seem to give a rat’s arse. Websites are still riddled with nasty drop-down menus that only work with a mouse. Sometimes they’re extra nasty by virtue of being ajaxified. At least Flash menus are finally going the way of the dinosaur.

Pro tip: every single bloody link on your web site should have a working HREF, so people can use it without relying on click handlers, mice or javascript, and so people can open the bloody thing in a new tab without going through hell and back.

Bonus points: make your links point to human-readable URLs.

Languages, you’re doing it wrong

The web is no longer an English-only or US-only playing field, and companies all over are starting to cotton on to this fact. What they have yet to realise, however, is that people don’t necessarily speak the language you think they do. If you rely on geolocation data to serve up translated content: stop. You’re doing it wrong. The user determines the language. Believe it or not, people do know which language(s) they speak.

Geolocation, for starters, isn’t an exact science: depending on the kind of device, it can be very accurate. Or very much not. Proxies, VPNs, onion routers etc. can obviously mislead your tracking. And even when it’s right, geolocation tells you very little. It doesn’t tell you why that person is there (maybe they’re on holiday?). It also doesn’t tell you what language is spoken there. This might be a shock to some people, but some countries have more than one official language. Hell, some villages do. Maybe you can find this data somewhere and correlate it with the location, but you’d be wrong to. Language is a very sensitive issue in some places. Get it right, or pick a sensible default and make it clear that it was a guess. Don’t be afraid to ask for user input.

Pro tip: My favourite HTTP header: Accept-Language. Every sensible browser sends this header with every request. In most cases, the default is the browser’s or OS’s language. Which is nearly always the user’s first language, and when it’s not, at least you know the user understands it well enough to be able to use a browser..
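
For example, a browser configured by a Dutch-speaking Belgian might send something like this (the q-values express the order of preference):

Accept-Language: nl-BE,nl;q=0.8,en;q=0.6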

Bonus points: Seriously, use Accept-Language. If you don’t, you’re a dick.

Clutter is back

Remember how, back in 1999, we all thought Google looked awesome because it was so clean & crisp and didn’t get in your face and everyone copied the trend? Well, that seems to have come to an end.
Here’s Yahoo in 1997. (I love how it has an ad for 256mb of memory.)
Here’s Yahoo now.

The 1997 version was annoying to use (remember screen resolutions in the 90s? No? You’re too young to read this, go away) because it was so cluttered.
The 2012 version is worse and makes me want to gouge my eyes out.

Even Google is getting all in your face these days, with search-as-you-type and whatnot. Bah. DuckDuckGo seems to be the exception (at least as far as search engines go). It offers power without wagging it in your face.

Pro tip: don’t put a bazillion things on your pages. Duh.

2013 Wishlist

My web-wishlist for 2013 is really quite simple: I want a usable web. Not just for people with the latest and greatest javascript-enabled feast-your-eyes-on-this devices: for everyone. Including those who use text-to-speech, or the blind, or people on older devices. Graceful degradation is key to this. So please, when you come up with a grand feature, think about what we might be giving up as well. Don’t break links. Don’t break the back button. Don’t break the web.


SSH gateway shenanigans

I love OpenSSH. Part of its awesomeness is its ability to function as a gateway. I’m going to describe how I (ab)use SSH to connect to my virtual machines. On a basic level this is pretty easy to do: you can simply forward different ports to different virtual machines. However, I don’t want to mess about with non-standard ports. Alternatively, you could give each of your virtual machines a separate IP address, but we’re running out of IPv4 addresses and many ISPs stubbornly refuse to use IPv6. Quite the pickle!

ProxyCommand to the rescue!

ProxyCommand in ~/.ssh/config pretty much does what it says on the tin: it proxies … commands!

Host fancyvm
        User foo
        HostName fancyvm
        ProxyCommand ssh foo@physical.box nc %h %p -w 3600 2> /dev/null 

This allows you to connect to fancyvm by first connecting to physical.box. This works like a charm, but it has a couple of very important drawbacks:

  1. If you’re using passwords, you have to enter them twice
  2. If you’re using password protected key files without an agent, you have to enter that one twice as well
  3. If you want to change passwords, you have to do it twice
  4. It requires configuration on each client you connect from

Almighty command

Another option is the “command=” option in ~/.ssh/authorized_keys on the physical box:

command="bash -c 'ssh foo@fancyvm ${SSH_ORIGINAL_COMMAND:-}'" ssh-rsa [your public key goes here]

Prefixing your key with command="foo" will ensure that "foo" is executed whenever you connect using that key. In this case, it will automagically connect you to fancyvm when you log in to physical.box using your SSH key. This has a small amount of setup overhead on the server side, but it’s generally the way I do things. The only real drawback is that it’s impossible to change your public key, which isn’t too bad, as long as you keep it secure.

The Actual Shenanigans

The command option is wonderful, but some users can’t or won’t use SSH key authentication. That’s a bit trickier, and here’s the solution I’ve come up with – but if you have a better one, please do share!

We need three things:

  1. A nasty ForceCommand script on the physical box
  2. A user on the physical box (with a passwordless ssh key pair)
  3. A user on the VM, with the above user’s public key in ~/.ssh/authorized_keys

This will grant us the magic ability to log in to the VM by logging in to the physical box. We only have to log in once (because the second part of the login is done automagically by means of the key files). A bit of trickery will also allow us to change the gateway password, which was impossible with any of our previous approaches.

Let’s start with a change in the sshd_config file:

Match User foo
        ForceCommand /usr/local/bin/vmlogin foo fancyvm "$SSH_ORIGINAL_COMMAND"

This will force the execution of our magic script whenever the user connects. And don’t worry, things like scp will work just fine.

And then there’s the magic script, /usr/local/bin/vmlogin:

#!/bin/bash

user=$1
host=$2
command="${3:-}"

# Special case: let the user change their gateway password.
if [ "$command" = "passwd" ] ; then
        bash -c "passwd"
        exit 0
fi

# Everything else is forwarded to the VM over the passwordless key pair.
command="'$command'"
bash -c "ssh -e none $user@$host $command"

Update 2016

The above script no longer works with SFTP on CentOS 7 with Debian guests. Not sure why, and I’m too lazy to find out. So here’s a script that works around the problem.

#!/bin/bash

user=$1
host=$2
command="${3:-}"

if [ "$command" = "passwd" ] ; then
        bash -c "passwd"
        exit 0
fi

# SFTP has been fucking up. This ought to fix it.
if [ "$command" = "/usr/libexec/openssh/sftp-server" ] || [ "$command" = "internal-sftp" ] ; then
        bash -c "ssh -s -e none $user@$host sftp"
        exit 0
fi

command="'$command'"
bash -c "ssh -e none $user@$host $command"

And there you have it: that’s all the magic you really need. Everything works exactly as if you were connecting to just another machine. The only tricky bit is changing the gateway password: you have to explicitly provide the passwd command when connecting, like so: ssh foo@physical.box passwd


On Ruby & PHP

So I quit my job. That was a relatively big change for me, after spending 3 years sat on the same chair working with the same team. But I felt it was time for a change. The new job is quite a bit different from the old one. One rather obvious difference is the programming language .. I’ve gone back to my proverbial roots and am now churning out PHP on a daily basis. And because I’m such a programming nut, I’m working on some after-hours Ruby projects as well.

PHP and Ruby are both obviously very different from Java - and very different from each other. Sometimes in good ways, sometimes in bad ways. Syntax-wise, PHP and Java are very similar. The dollar sign on my keyboard is definitely seeing a lot more action now, but other than that not many dramatic finger changes are required. PHP doesn’t have a bunch of things I like, such as final method arguments, multi-threading or a half-decent collection of standard classes. What it does have is excellent documentation and easy & interesting meta-programming. That, and it’s obviously much more tailored for the web than Java. For the first time in 3 years I don’t actually need a whole armada of frameworks just to get something simple done. Still .. I miss my (Concurrent)HashMap, ArrayList, and most importantly my trusted BigDecimal.

And then there’s Ruby .. Even after a couple of months of Ruby, I have very mixed feelings about it. The documentation sucks. Don’t argue with me, it really does suck big fucking donkey balls. RDoc (especially online) is to JavaDoc what Spam is to nice and tasty Prosciutto. Then there’s the lack of backwards compatibility (granted, Java may take this a bit too far, but still).

And then … there’s the syntax. This is where my feelings are mixed, rather than plain negative. There’s too much choice. You can mix and match a bunch of styles, do..end, curly braces, semicolons or no semicolons, method arguments with or without parentheses, and the list probably goes on. This leads to code that’s hard to read (and thus hard to maintain), especially when different people stick to different styles. On the other hand, it also leads to nice syntactic sugar. Clever use of methods can make it look like you’re defining your own keywords, which is very nice for creating your own DSLs. It’s also rather nice to have Operator Overloading, and being able to treat everything like an object also has its benefits.

Still .. my feelings remain mixed. And in spite of everything, I feel like Java is still the best choice for Serious Work. Not only because of the amazing documentation, the excellent standard library or even the massive community, but also for the JVM itself, its security model and its thriving ecosystem.

Java ain’t dead, folks.