Wednesday, March 21, 2012

Static Variables Considered Harmful

In my last post, I mentioned that a particular programmer had abused static initializers. I've been thinking more about this, and decided that the real problem was that this programmer actually abused static variables. The initializer abuse was simply a side effect, an attempt to make a bad design work.

To start, I'm going to give you my thoughts on static initializers: they're a code smell in and of themselves. If you're not familiar with static initializers, here's an example: we have a static Map used for a lookup table, and need to populate it before use. A static initializer runs when the class is loaded, before any code is allowed to use the class, so we can guarantee that the map will be populated.

import java.util.HashMap;
import java.util.Map;

public class MyClassWithALookupTable {
    private static Map<String,String> states = new HashMap<String,String>();

    static {
        states.put("AK", "Alaska");
        states.put("AL", "Alabama");
        // ...
    }
}
This sort of code is, in my opinion, the only valid use for a static initializer: initializing an object that will never change, using code that is guaranteed never to throw. In effect, creating a constant.
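To make the "constant" nature explicit, you can freeze the map after populating it. A minimal sketch, with class and method names of my own choosing (they're not from the original code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class StateLookup {
    private static final Map<String,String> STATES;

    static {
        // populate a temporary map, then freeze it: any later put()
        // on STATES throws UnsupportedOperationException
        Map<String,String> m = new HashMap<String,String>();
        m.put("AK", "Alaska");
        m.put("AL", "Alabama");
        STATES = Collections.unmodifiableMap(m);
    }

    public static String nameFor(String abbreviation) {
        return STATES.get(abbreviation);
    }
}
```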

The “guaranteed never to throw” part is important: static initializers run before any code is allowed to use the class; what happens if they do throw? The language spec says that static initializers are not permitted to throw checked exceptions, but they are allowed to throw unchecked exceptions (although a smart compiler will make you hide the exception in a called method). If the initializer does throw, you'll get an ExceptionInInitializerError the first time you access the class.
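Here's a contrived illustration of that failure mode (class names are mine). Note that the compiler rejects an unconditional throw directly inside a static initializer, because an initializer must be able to complete normally; that's why the exception is hidden in a called method:

```java
class BrokenInitializer {
    static {
        failFast();  // the unchecked exception escapes the initializer
    }

    private static void failFast() {
        throw new IllegalStateException("config missing");
    }

    static void touch() { }
}

public class InitializerDemo {
    private static String message;

    static String probe() {
        if (message != null) return message;
        try {
            BrokenInitializer.touch();  // first use loads the class and runs the initializer
            message = "no error";
        } catch (ExceptionInInitializerError e) {
            // the real problem is the cause; any later use of the broken
            // class would throw NoClassDefFoundError instead
            message = e.getCause().getMessage();
        }
        return message;
    }

    public static void main(String[] args) {
        System.out.println("caught: " + probe());
    }
}
```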

How do you debug such errors? You can't use a debugger, because the class has to be loaded before you can set a breakpoint. You can insert a logging statement, assuming that your logging framework is able to initialize (one of the more painful things I've ever debugged involved a homegrown addition to Log4J that had a static initializer that threw). Even if you log the exception, you have to find where it's being thrown: people who abuse static initializers tend to do it a lot, so you might have to work through a long list of classes, adding exception handlers to each, before you find the one that's broken.

OK, so static initializers are a code smell. How do I make the leap from there to “static variables considered harmful?”

Going back to the legacy application in question, the original programmer used a static initializer to populate a lookup map. But the values to populate that map came from Hibernate. And since Hibernate was accessed through a DAO loaded as a Spring bean, he had to create a way to access the application context in a static manner. So he created a new class, with a static variable to hold the app context. And he wired this class into the DispatcherServlet configuration.

It's a long list of hacks, but here's the thing: every one of them follows logically from the one before. And it all traces back to a lookup table whose contents are allowed to vary.

There are certainly other solutions that he could have used. For example, I replaced the static initializer with a Spring post-construct method that populates the map from the associated DAO. But this is only one example of how static variables (as opposed to constants) can lead you astray. More common are concurrency bugs, or “how did that value get there?” bugs. It seems a lot smarter to simply say “don't do this,” with the caveat for experts of “don't do this (yet).”
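The shape of that fix, sketched in plain Java without the Spring wiring (the DAO interface and every name here is hypothetical): initialization moves to construction time, so the DAO can be injected and a failure surfaces as an ordinary exception with an ordinary stack trace.

```java
import java.util.HashMap;
import java.util.Map;

// stand-in for the Hibernate-backed DAO; in the real application this
// would be a Spring bean (hypothetical name and signature)
interface StateDao {
    Map<String,String> loadStates();
}

class StateLookupService {
    private final Map<String,String> states;

    // the container (or a test) hands us the DAO; with Spring, this same
    // population step could live in a @PostConstruct method instead
    StateLookupService(StateDao dao) {
        states = new HashMap<String,String>(dao.loadStates());
    }

    String nameFor(String abbreviation) {
        return states.get(abbreviation);
    }
}
```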

Monday, March 19, 2012

Synchronizing put() is Not Sufficient

Have you ever seen a class like this, which synchronizes put() but not get()? Have you ever written one?

public class MyConcurrentlyAccessedLookupTable {
    private Map<String,MyDataObject> data = new HashMap<String,MyDataObject>();

    // ...

    public MyDataObject get(String key) {
        return data.get(key);
    }

    public synchronized void put(String key, MyDataObject value) {
        data.put(key, value);
    }
}

I've been working over a legacy codebase recently, and have seen a lot of them. And I want to track down the original developer, give him a good shake, and say “this only works because you never put anything into the map after initializing it!” Actually, this is one of the least egregious ways that he (yes, he, I know his name) managed to fail at concurrent programming. Don't get me started on his abuse of static initializers.

OK, it feels good to let that out. And I can at least understand why he thought it would work: a HashMap is, by its nature, somewhat resilient to incorrectly managed concurrent access. Under normal operation, new items are added to a bucket's linked list, so a get() will either see the new entry or it won't. With put(), though, two new entries might get added to the same bucket list, and if the scheduler switches threads in the middle, one of the entries can simply disappear (a lost update).

And then there's resize: doubling the size of the hash table to reduce the depth of the bucket chains. And, at least in the Sun implementation, there's no attempt to preserve the chains during this operation (I can't see why any implementation would, but there might be one). So put() could get a chain using the old table, but that chain could be assigned to a different bucket by the time the new entry is attached.

OK, so in both cases get() fails to find an entry that is actually in the map. Is that really so bad? Well, is it bad if you spend days trying to figure out why a transaction failed when the data was obviously present? Or if that failed transaction caused your company to lose a sale?

The problem with concurrency bugs is that they aren't easily reproducible. Sometimes they appear once and never again. But they're still bugs, and they're not that difficult to avoid: when in doubt, synchronize all access to shared state. Only if you truly understand your access patterns (e.g., you populate the map once and then only read from it) should you skip synchronization altogether.

But never go halfway.
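For the record, here's what not going halfway looks like: both methods synchronize on the same lock. (I've swapped the value type to String to keep the sketch self-contained; in current Java, java.util.concurrent.ConcurrentHashMap would be the better choice anyway.)

```java
import java.util.HashMap;
import java.util.Map;

public class SynchronizedLookupTable {
    private final Map<String,String> data = new HashMap<String,String>();

    // reads and writes share one lock, so get() can never observe the
    // map mid-update or mid-resize
    public synchronized String get(String key) {
        return data.get(key);
    }

    public synchronized void put(String key, String value) {
        data.put(key, value);
    }
}
```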

Wednesday, March 7, 2012

A Month with Ruby and Rails

My current project is moving slowly, so I've been making changes to our corporate website. It uses Ruby on Rails, so it has given me the opportunity to learn both in the context of a real project. At this point I think I'm comfortable enough with both Ruby and Rails to make some comments, albeit still from a novice's perspective. And my first comment is that I don't love Ruby, but I don't hate it either. I was told that I'd go down one of those two paths, but it hasn't happened yet.

There's a lot to like about Ruby, and my favorite is that nil is an object, allowing constructs like foo.present?. This doesn't eliminate null-pointer errors, but it reduces the number of times that you have to worry about them. For example, the conditional expression foo == "bar" doesn't throw, it returns false (which is just what you'd want).
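A few irb-style examples of nil behaving as an ordinary object (strictly speaking, present? comes from ActiveSupport rather than core Ruby; nil? is the core equivalent):

```ruby
foo = nil

foo == "bar"   # => false, not an error
foo.nil?       # => true
foo.class      # => NilClass
foo.to_s       # => ""
```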

I also like the way that Ruby allows any method to take an optional block. This is the basis for standard constructs like each, and it also allows for a form of “progressive enhancement.” A nice example of this is in the event_calendar gem, where you can pass in a block that decorates each event (in the README, it's used to wrap each event in a link). If you don't pass that block, you get a basic calendar.
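The pattern itself is simple. Here's a sketch with names of my own choosing (this is not event_calendar's actual API): yield to the decorator block when one is given, otherwise fall back to plain output.

```ruby
# renders each event as-is, unless the caller supplies a block to
# decorate it: "progressive enhancement" via an optional block
def render_events(events)
  events.map { |event| block_given? ? yield(event) : event.to_s }
end

render_events(["standup"])
# => ["standup"]

render_events(["standup"]) { |e| "<a href='/cal'>#{e}</a>" }
# => ["<a href='/cal'>standup</a>"]
```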

OK, now it's time for my dislikes, and chief of these is that objects and hashes are two different things. One of the nice features of JavaScript is that foo["bar"] and foo.bar do the same thing: they retrieve a member from an object. Ruby, however, is surprisingly formal in its approach to member access; indeed, it's much like Java in that member variables are private and you have to create accessor methods for them. And while Ruby gives you some syntactic sugar to make the process easier, it's still annoying boilerplate, especially when you have to manually extract hash arguments in initialize.

I can understand the desire to limit access to member variables. But then Active Record throws its own wrench into the mix, by defining a [] method. Some libraries (like event_calendar) use it because it has the nice property of returning nil if the member doesn't exist. Which in turn means that, if you want to pass your non-Model DTO to those libraries, you have to implement something like the following reflective code:

def [](attr_name)
  # (sketch: the original snippet was truncated; a send-based lookup
  # like this matches the behavior described above)
  respond_to?(attr_name) ? send(attr_name) : nil
end

That brings me to my second dislike, which is focused more on Rails than Ruby: the “DSL” approach to programming can quickly become uncontrolled. When I was first exposed to Rails, I really liked the way that it leveraged method_missing to provide convention-driven operations such as find_person_by_last_name. After working with it for a while, I've come to the conclusion that it's really 10,000 helper methods to cover corner cases where convention doesn't work. And they're all defined by modules that are transparently mixed into your classes.

This wouldn't be so bad, except the documentation is truly horrendous. Take ActiveRecord: it's a module that consists solely of other modules, so its documentation page just lists those modules. There's no index, so finding anything becomes a treasure hunt, often involving the method method. Even then, I find myself reading source code. For example, the [] method is defined by ActiveRecord::Base, but there's no mention of it in the API docs. Google truly has become my friend.
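When the docs fail, Ruby's reflection at least tells you which module a method really comes from; Method#owner is the usual starting point for that treasure hunt:

```ruby
"abc".method(:upcase).owner      # => String
[1, 2].method(:each_slice).owner # => Enumerable (mixed in, not defined on Array)
```

In a Rails console, the same trick will point a mixed-in method back to the module that contributed it.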

I'll close with StringInquirer. This is a class that lets you write foo.value? rather than foo == "value". When I discovered this (trying to understand Rails.env), I sent an email to a friend that described it as “either a brilliant use of method_missing or an utter perversion of same.” I suspect that my final answer to that question will determine whether I love or hate Ruby.
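For the curious, the idea reduces to a few lines of method_missing. This is my own minimal re-implementation, not Rails' actual code:

```ruby
class Inquirer < String
  # "production".production? => true; any other ?-method => false;
  # everything else falls through to String via super
  def method_missing(name, *args)
    name.to_s.end_with?("?") ? self == name.to_s.chomp("?") : super
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.end_with?("?") || super
  end
end

env = Inquirer.new("production")
env.production?   # => true
env.development?  # => false
```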

Friday, March 2, 2012

Rise of the (64 bit) Machines

In my ByteBuffer presentation, I note that 64-bit machines have become inexpensive and ubiquitous, and ask the rhetorical question “what are we doing with all this memory?” This is the setup to a fake graph that proclaims that it's being used to play FarmVille™.

A joke, yes, but it points to a bigger truth: when a consumer-class machine has multiple cores and 8 GB of memory, there's a lot of computing power that isn't being used. And these machines represent the next round of corporate desktop purchases. Five years from now, a typical office user will have four cores, eight gig of RAM, and a terabyte of disk. You don't need that to run Outlook.

I think this is the next opportunity for virtualization technology. To date, virtualization has been used to carve a large machine into smaller pieces. In the future I see it being used to carve up desktop machines: the person sitting in front of the computer will get only a fraction of the capacity of his or her machine. The IT department will get the rest.

There are a lot of problems to be solved before a business runs its critical infrastructure on the desktops of its employees. For one thing, the power buttons have to be disabled — or at least repurposed to shut down only the virtual machine. And virtualization software will have to evolve, to be hidden from the desktop user. At the least, the local display, keyboard, and mouse have to act like they always have.

But more important, our ways of thinking about enterprise-scale software will also have to change. We must write software that is completely unaware of where it's running. And which can recover even if the computer is unplugged.

Today's “cloud,” be it EC2 or a corporate data center, is our chance to experiment. Think of it as training wheels for a fully distributed future.