Friday, December 31, 2010

Year-end Reset

Memo to self: if you decide to post a series of articles, make sure they're all written before you post the first one.

The series on primitives got a little out of hand. After my last post, I decided that I needed to write my own in-place integer sort before the next. And then the real world intervened. At this point, it's about 80% done and there would be five more sections.

When I started the blog two years ago, it was intended for topics that didn't require a lot of thought. My goal was to come home from work with an idea, write the draft, polish it in the morning, then post. However, one idea often leads to another, or I'll have a couple of ideas that seem related, and that leads to a series of posts. The series on primitives doesn't fit that goal, so it's going to get moved to my main website.

And in the new year, I'm back to short takes on whatever programming-related topics I happen to be thinking about.

Wednesday, October 27, 2010

Living Among Primitives, part 3: Collections

The last post “turned an object inside-out” to eliminate the overhead of a list of discrete objects. While that technique allows you to pack a lot of objects into a relatively small amount of space, your programs will be a lot cleaner if you can pass around Point objects rather than a PointHolder and its index. The java.util.AbstractList class can help you reach a middle ground.

import java.util.AbstractList;

public class PointHolder2
extends AbstractList<Point>
{
    private float[] _x;     // parallel coordinate arrays, as in PointHolder
    private float[] _y;
    private float[] _z;
    private int _size;

    public PointHolder2(int capacity)
    {
        _x = new float[capacity];
        _y = new float[capacity];
        _z = new float[capacity];
    }

    @Override
    public int size()
    {
        return _size;
    }

    @Override
    public boolean add(Point point)
    {
        _x[_size] = point.getX();
        _y[_size] = point.getY();
        _z[_size] = point.getZ();
        _size++;
        return true;
    }

    @Override
    public Point get(int index)
    {
        return new Point(_x[index], _y[index], _z[index]);
    }

    @Override
    public Point set(int index, Point point)
    {
        _x[index] = point.getX();
        _y[index] = point.getY();
        _z[index] = point.getZ();
        return point;
    }
}

AbstractList is a tool for implementing lists with an application-specific backing store. Out of the box, you must implement the get() and size() methods; the mutator methods all throw UnsupportedOperationException until you override them. But as you can see, it doesn't take a lot of work to implement them, and the result performs nearly identically to an ArrayList<Point>.
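Usage then looks like any other list of points; a minimal sketch, using the capacity constructor from the class above:

List<Point> points = new PointHolder2(50000000);
points.add(new Point(1.0f, 2.0f, 3.0f));
Point first = points.get(0);   // a new Point, built from the backing arrays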

And the Point class has reappeared, making your application code cleaner. But now they're created as-needed, and will be collected as soon as the program is done with them; you won't pay the memory overhead to keep them around. And assuming that they're relatively short-lived once created, they should rarely if ever make their way out of the heap's young generation, so will be cheap to collect.

There is one important difference between PointHolder and ArrayList<Point>, not immediately apparent from this example: since the Point instances do not actually exist within the list, changes to them won't be reflected in the PointHolder data. As implemented, each Point instance contains its own copies of the primitive values. To make the Point mutable, you'll need to design them with a reference back to the PointHolder (which I won't go into because I don't like mutable data-only objects).

Now that you have a space-efficient List<Point>, you may be thinking of how you'd access it — for example, how to search for a specific point. And you might think of calling Collections.sort(), followed by Collections.binarySearch(). And while this will work, it's not very efficient (because you're constantly creating Point instances), and has hidden memory costs. Those are topics for the next post.
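In the meantime, here's a sketch of the naive approach, to make those costs concrete. The Comparator ordering is my own choice (any consistent ordering will do), and it assumes points and target defined by the surrounding code:

// assumes List<Point> points and Point target are in scope
Comparator<Point> byCoordinates = new Comparator<Point>()
{
    public int compare(Point p1, Point p2)
    {
        int cmp = Float.compare(p1.getX(), p2.getX());
        if (cmp == 0) cmp = Float.compare(p1.getY(), p2.getY());
        if (cmp == 0) cmp = Float.compare(p1.getZ(), p2.getZ());
        return cmp;
    }
};

// sort() copies the list into a temporary array, creating one Point per element
Collections.sort(points, byCoordinates);

// binarySearch() creates a new Point for every element it examines
int index = Collections.binarySearch(points, target, byCoordinates);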

Tuesday, October 19, 2010

Living Among Primitives, part 2: Data Structures

It may seem strange to talk about “living among primitives” and then jump to data structures, but the truth is that most programs work with structured data. So even in a world where you've turned to primitives, you need to think about how you'll organize those primitives.

And the best way, surprise, is to use Java objects — but with an eye toward the overheads that they introduce. In the last post I noted that an Integer adds 8 bytes of overhead to 4 bytes of data. That number came from Java Platform Performance: Strategies and Tactics, and I suspect it's higher for 64-bit JVMs: I recall reading in an early Hotspot doc that the overhead consisted of a reference to the object's class (which would be 4 bytes under 32-bit, 8 under 64-bit) and the object's identity hashcode. Arrays add another 4 bytes of overhead to hold their length.

As a case study for the rest of this post, I'm going to look at a three-dimensional Point class:

public class Point
{
    private float _x;
    private float _y;
    private float _z;
}

The data in this object takes up 12 bytes (float is a 32-bit IEEE 754 floating-point value), and we'll be conservative on overhead and say it's 8 bytes (who knows, perhaps a 64-bit JVM will efficiently encode a reference to the object's class). So a total of 20 bytes per instance, 50 million instances per gigabyte. But you'll need something to manage those 50 million instances:

Point[] points = new Point[50000000];

On a 64-bit JVM, each element in that array is a reference, consuming 8 bytes; that means nearly half a gigabyte just to keep track of your gigabyte of points! And from here on out I'm going to assume that you have data volumes that require a 64-bit JVM.

One way to deal with this is to turn the structure inside-out: rather than an array of points, why not an object that holds arrays of values:

public class PointHolder
{
    private float[] _x;
    private float[] _y;
    private float[] _z;

    public PointHolder(int capacity)
    {
        _x = new float[capacity];
        _y = new float[capacity];
        _z = new float[capacity];
    }

    public float getX(int index)
    {
        return _x[index];
    }

    // and so on
}

This approach may make your code a little more complex, but it's eliminated 16 bytes of overhead per point: the references in the original array, plus the per-object overhead. You can now fit nearly 90 million points in a gigabyte. And it has another benefit: you can change the implementation without changing your code. For example, if you're dealing with truly huge arrays of points, you could move them off the heap entirely, using a memory-mapped file.
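To make that last idea concrete, here's a sketch of a memory-mapped variant; the class name and file layout are my own, and a real implementation would need bounds checking and a way to grow the file:

import java.io.RandomAccessFile;
import java.nio.FloatBuffer;
import java.nio.channels.FileChannel;

public class MappedPointHolder
{
    // points live outside the heap, as packed x,y,z triplets
    private FloatBuffer _coords;

    public MappedPointHolder(String filename, int count) throws Exception
    {
        RandomAccessFile file = new RandomAccessFile(filename, "rw");
        _coords = file.getChannel()
                      .map(FileChannel.MapMode.READ_WRITE, 0, count * 12L)
                      .asFloatBuffer();
    }

    public float getX(int index)  { return _coords.get(index * 3); }
    public float getY(int index)  { return _coords.get(index * 3 + 1); }
    public float getZ(int index)  { return _coords.get(index * 3 + 2); }
}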

The problem with “turning the object inside-out” is that you have to give careful thought to how you work with the data: for example, how do you sort these points? How do you find a particular point? That will be the topic of the next few posts.

Monday, October 18, 2010

Living Among Primitives, part 1

There are a lot of people who say that Java shouldn't have primitive data types, that everything should be an object. And there is some truth to their arguments. Most business applications don't do intensive numeric operations, and for those that do, double is a poor substitute for a fully-thought-out Money. You could even claim that the presence of primitives leads to a procedural style of code: things like using an index variable rather than an iterator to access a list. And languages such as Ruby and Python follow the “everything's an object” path without apparent ill effect.

But I feel that primitives are in fact one of the strong points of the Java language, and languages that don't have native primitives often sprout hacks to replace them. There are several reasons for primitives to exist.

One is performance. It's true that, for simple expressions, evaluated once, you can't tell the difference. But there's a reason that libraries such as NumPy, which performs matrix math (among other things), are written in C or C++ rather than Python.

There's a primitive at the heart of any numeric object — after all, that's what the CPU works with. So when you invoke a method on a numeric object, not only do you have the overhead of a function call, but you're calling into native code. True, Hotspot could recognize mathematical operations and translate them into inline native code. But that still leaves the cost of allocating and collecting objects that are used only during the evaluation of a function. Unless Hotspot recognized those and transformed them into, well, primitives.

Another, often more important reason to use primitives is memory consumption: an int takes 4 bytes, but on a 32-bit Sun JVM an Integer adds 8 bytes of overhead. There's also the overhead of the reference to that Integer: 4 bytes on a 32-bit JVM, 8 bytes on a 64-bit JVM. Again, that doesn't seem like much, unless your application is dealing with hundreds of millions of numeric values.

And hundreds of millions of numeric values is exactly what I'm dealing with on my latest project … to the point where I recently told a colleague that “this application should be written in C.” The next few posts will dive into some of the issues that you'll face when living among primitives.

Monday, September 20, 2010

BigMemory

Last week Terracotta released BigMemory, which stores objects outside of the Java heap and “defuses the GC time bomb.” Since I've written the top-ranked relevant article when you Google for “non-heap memory” (it's on page 2), my website has seen a spike in volume. Unlike previous spikes, people have been poking around, which is nice to see.

I did some poking around myself, both to Terracotta's blogs, and to some of the Twitter posts and articles that linked to my site. And I saw a lot of comments that disturbed me. They ranged from (paraphrased) “this is just hype, it isn't anything new,” to “I wonder if Oracle will file a patent lawsuit on this,” to “the JVM should be doing this itself,” to the ever-popular “people should just learn to write efficient code.” I'll take on that last one first.

But first some caveats: I've never used EHCache outside of Hibernate, and even then I simply followed a cookbook. My closest contact with EHCache development was attending a Java Users Group meeting with a badly jetlagged Greg Luck, followed by beer with him and a bunch of other people. I can't remember what was discussed around the table, but I don't think it was cache internals. But I do know something about caches, and about the way that the JVM manages memory, so I feel at least qualified to comment on the comments.

And my first comment is: a cache is efficient code … when appropriately used. You can spend days micro-optimizing your code, only to find it all wasted with a single trip to the database.* You use a cache when it's expensive to retrieve or create often-used data; the L1 and L2 caches in your processor exist because memory is relatively slow, so it's expensive to retrieve. Similarly, it's expensive to create a web page (and load its data from the database), so you would use EHCache (or Akamai) to cache the pre-built page (EHCache is cheaper).

The interface to a cache is pretty straightforward: you give it a key, it gives you a value. In the case of the processor's L1 cache, the key is an address, the value is the bytes around that address. In the case of memcached, the key is a string and the value is a blob. In Java, this corresponds to a Map, and you can create a simple cache using a LinkedHashMap.

The reason to use a LinkedHashMap, versus a regular HashMap, is eviction: removing old entries from the map. Although memory is cheap, it's not infinite, and there's no good reason to keep every piece of data ever accessed. For example, consider Stack Overflow: when a question is on the home page, it might get hundreds of views; clearly you don't want to keep reading it from the database. But after it drops off the home page, it might get viewed once a week, so you don't want to keep it in memory.

With LinkedHashMap, you can implement a simple eviction strategy: once the map reaches a certain size, you remove the head entry on each new put(). More sophisticated caches use more sophisticated strategies: for example, weighting the life of a cached object by the amount of work that it takes to load from source. In my opinion, this is why you go with a pre-built solution like EHCache: they've thought through the eviction strategies and neatly packaged them.
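Here's a minimal sketch of that simple strategy (the class name is mine); LinkedHashMap's removeEldestEntry() hook does all the work:

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLRUCache<K,V>
extends LinkedHashMap<K,V>
{
    private final int _maxEntries;

    public SimpleLRUCache(int maxEntries)
    {
        super(16, 0.75f, true);     // true = iterate in access order, giving LRU behavior
        _maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K,V> eldest)
    {
        // called after each put(); returning true evicts the head (eldest) entry
        return size() > _maxEntries;
    }
}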

So that's how a cache works. What's interesting is how it interacts with the garbage collector. The point of a cache is to hold content in memory for as long as it's valuable; for a generational garbage collector like Sun's, this means it will end up in the tenured generation … but eventually get removed. And this causes two issues.

First is the cost to find the garbage. The Hotspot GC is a “mark-sweep” collector: it starts at “root” references and traces through the entire object graph to find live objects; everything else is garbage (this is all covered in my article on reference objects; go back and click the link). If you have tens or hundreds of millions of objects in your heap (and for an eCommerce site, that's not an unreasonable number), it's going to take some time to find them all: a few milliseconds. This is, as I understand it, the price that you pay for any collection, major or minor.

A major collection, however, performs one additional step: it compacts the heap after collecting the garbage. This is a big deal. No matter how fast your CPU and memory, it takes time to move gigabytes of memory around. Particularly since the JVM is updating all the references to the moved objects. But if you've ever written a C application that fragmented its heap, you'll thank the Hotspot team every time you see a major collection take place.

Or maybe you won't. Heap compaction is an artifact of tuning the JVM for turn-of-the-century computers, where 1GB of main memory was “server class,” and swap (the pagefile) was an important part of making your computer usable for multiple applications. Compacting the heap was a way to reduce the resident set size, and improve the overall performance of an application.**

Today, of course, we have desktop-class machines with multiple gigabytes, running an OS and JVM that allow direct access to all of it (and more). And there's been a lot of work in the past few decades to reduce the risk of fragmentation with C-style memory allocation. So maybe it is time to rethink garbage collection in the JVM, at least for the tenured generation, and introduce freelists (of course, that will inflame the “Java is a memory hog” crowd). Or maybe it's time to introduce a new generation, for large tenured objects. Maybe in JDK 1.8. I'm not holding my breath, and neither, it seems, were the EHCache developers.

So to summarize: EHCache has taken techniques that are well-known, and applied them to the Java world. If I were still working in the eCommerce field, I would consider this a Good Thing, and be glad that they invested the time and effort to make it work. Oh, and as far as patents go, I suspect that IBM invented memory-mapped files sometime in the 1960s, so no need to worry.


* Or, in the words of one of my favorite authors: “No matter how subtle the wizard, a knife between the shoulder blades will seriously cramp his style.”

** If you work with Swing apps, you'll often find a minimize handler that explicitly calls System.gc(); this assumes that the app's pages will be swapped out while minimized, and is an attempt to reduce restore time.

Friday, September 17, 2010

Project Management

I recently had a revelation: the first time that I worked with a Project Manager — a person whose sole role is maintaining a schedule and coordinating the tasks on that schedule — was 2002. For nearly 20 years of my career, I worked on teams where project management was a subsidiary role of the team lead or development manager. True, my career has mostly been spent at small companies, some that couldn't afford a dedicated project manager. But there were also a few larger ones — including GE, which you'd expect to be a bastion of project management and rigorous checklist-checking.

Before continuing, I want to say that, unlike many developers, I don't disdain project management per se. I've worked on projects that have succeeded (or at least failed less badly) because a talented project manager pulled together people with diverging goals, people who might have otherwise ignored or actively undercut one-another. I've also worked on projects where the project manager seemed to be actively inflaming the participants. Either way, it's a role with impact, one that cannot be ignored.

So why did I spend two thirds of my career without ever seeing a project manager? I think the answer is that the structure of software development organizations changed over that time, along with the companies where they reside. And that's not necessarily a Good Thing.

But first, a little history. Corporate management, as we know it today, didn't exist before the mid-1800s. Prior to that time, businesses were small and generally confined to a single location; a few hundred employees was an industrial giant. The railroads changed all that: they hired thousands of employees, for a myriad of functions, and those employees were dispersed across the thousands of miles of terrain served by the railroad.

Up to that point, management relied on instant, face-to-face communication between front office and factory floor. This simply was not going to work for the railroads. In response, they adopted and adapted the hierarchical structure of the military, and even some of its terminology. The corporation was now composed of semi-autonomous divisions, which took strategic direction from the home office, but had freedom in tactical operations. Each division had its own complement of functional organizations such as maintenance shops, and those functional organizations kept largely to themselves.

This model worked well for the railroads, and for the giant industrial corporations that followed. You can even see the functional structure embodied in the layout of a manufacturing plant. And it permeated the thinking of the people working for those corporations: at GE in the late 1980s I received a five-minute dressing-down from a mid-level manager, for daring to use a photocopy machine that belonged to his group. Even in the software industry, the hierarchical mindset prevailed: as you read The Mythical Man-Month, you won't find “project” managers, just managers.

So where do these project managers come from? I think the answer is construction.

Whether you hire a general contractor for your home remodel, or Bechtel for a billion dollar highway project, you get a project manager. And they're necessary: the construction industry is fragmented into dozens of different trades and specialties within trades, even at the level of home repair. Carpenters, electricians, plumbers, masons, sheetrock installers, painters, tilers, landscape designers, and so on … you need all of them, and none of them does the others' jobs. And more important, each works for only a small part of the project schedule, and then they're gone. And if they don't start at exactly the right time, the whole project gets delayed.

It works for construction, so why not software?

In the 1980s and 1990s, corporations started to adopt “matrix management.” The reason was simple economics: self-contained organizations waste money. Just as you wouldn't want to pay a sheetrock crew to sit idle while the carpenters are building stud walls, most organizations don't want to pay a DBA to sit idle while the developers write front-end code. So the DBA team gets matrixed to the project team: when the project team needs a DBA, one will be assigned.

From the company's perspective, this maximizes employee utilization. And from the DBA's perspective, it's a better career path: rather than being isolated in a product-specific development team, she gets to work with her peers and have her work recognized by a manager who doesn't think that mauve databases have more RAM. Everybody wins.

But something I noticed, when working with matrix organizations, is that you could never find a DBA when you needed one — or, as matrix management spread, any other specialist. They always seemed to have other projects demanding their time. Perhaps that was really true: for a corporation wanting to reduce costs, why stop with sharing people, why not understaff as well? But I also noticed that you could always get a DBA to turn up for meetings where there were donuts present.

And what I inferred from this is that matrix management creates a disincentive to project loyalty. After all, the specialist career path depends more on pleasing the specialist manager than the project lead. In the best of cases, specialists can cherry-pick projects that catch their interest, ignoring the rest. In the worst cases, there are lots of places to hide in a matrix organization.

This effect goes deeper than a few DBAs, however. In a fully-matrixed organization, project teams are ad hoc. You no longer have developers who work on a product; they work on a project. And when it's done — or fails — they move on to another project, taking with them the in-depth knowledge of what they did and what they should have done. Long-term loyalty simply doesn't exist.

And with the creation of ad hoc teams, you need an ad hoc manager: the project manager. So to reiterate, it's not that project managers are bad per se, it's what their presence says about the organization that disturbs me.

Wednesday, August 11, 2010

Agile Isn't New

I recently read C. A. R. Hoare's 1980 ACM Turing Award speech, “The Emperor's Old Clothes” (currently downloadable here). The theme of this speech is simplicity, in particular how lack of simplicity in a programming language makes it harder to write error-free code — summarized as “so simple that there are obviously no deficiencies [versus] so complicated that there are no obvious deficiencies” (emphasis as written). This, of course, resonates with my feelings about mental models.

About midway through the speech, Hoare describes a failed project: a new operating system that was to dramatically extend the capabilities of his company's former offering. It reads like a recap of The Mythical Man-Month, right down to the programmers' assumption that memory was infinite. But where Brooks turned to organizational strategies to dig his team out from failure, Hoare did something else:

First, we classified our […] customers into groups […] We assigned to each group of customers a small team of programmers and told the team leader to visit the customers to find out what they wanted […] In no case would we consider a request for a feature that would take more than three months to implement and deliver […] Above all, I did not allow anything to be done which I did not myself understand.

That quote could have come from a book on Extreme Programming. Short iterations, understandable stories, pulling the customer into the development process. It's all there.

Or, I should say, it was all there. In 1965. Presented to a group of practicing programmers in 1980. And then “rediscovered” by Beck, Jeffries, et al in the 1990s.

Why do we keep forgetting?

Monday, August 9, 2010

Ant, Taskdef, and running out of PermGen

Although I've switched to Maven for building Java projects (convention over configuration ftw), I still keep Ant in my toolbox. It excels at the sort of free-form non-Java projects that most people implement using shell scripts.

One reason that Ant excels at these types of projects is that you can easily implement project-specific tasks such as a database extract, and mix those tasks with the large library of built-in tasks like filter or mkdir. And the easiest way to add your tasks to a build file is with a taskdef:

    <taskdef name="example"
             classname="com.kdgregory.example.ant.ExampleTask"
             classpath="${basedir}/lib/mytasks.jar"/>

Last week I was working on a custom task that would retrieve data by US state. I invoked it with the foreach task from the ant-contrib library, so that I could build a file covering all 50 states. Since I expected it to take several hours to run, I kicked it off before leaving work for the day.

The next morning, I saw that it had failed about 15 minutes in, having run out of permgen space. And the error happened when it was loading a class. At first I suspected the foreach task, or more likely, the antcall that it invoked. After all, it creates a new project, so what better place to create a new classloader? Plus, it was in the stack trace.

But as I looked through the source code for these tasks, I couldn't see any place where a new classloader was created (another reason that I like Ant is that its source is generally easy to follow). That left the taskdef — after all, I knew that my code wasn't creating a new classloader. To test, I created a task that printed out its classloader, and used the following build file:

<project default="default" basedir="..">

    <taskdef name="example1"
             classname="com.kdgregory.example.ant.ExampleTask"
             classpath="${basedir}/classes"/>
    <taskdef name="example2"
             classname="com.kdgregory.example.ant.ExampleTask"
             classpath="${basedir}/classes"/>

    <target name="default">
        <example1 />
        <example2 />
    </target>

</project>
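The task itself was trivial; the original isn't shown here, but it was something like this sketch (it also logs the owning project, which shows up in the output below):

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class ExampleTask
extends Task
{
    @Override
    public void execute() throws BuildException
    {
        log("project:     " + getProject());
        log("classloader: " + getClass().getClassLoader());
    }
}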

Sure enough, each taskdef is loaded by its own classloader. The antcall simply exacerbates the problem, because it executes the typedefs over again.

It makes sense that Ant would create a new classloader for each project, and even for each taskdef within a project (they can, after all, have unique classpaths). And as long as the classloader is referenced only from the project, it — and the classes it loads — will get collected at the same time as the project. And when I looked in the Project class, I found the member variable coreLoader.

But when I fired up my debugger, I found that that variable was explicitly set to null and never updated. Then I put a breakpoint in ClasspathUtils, and saw that it was being invoked with a “reuse” flag set to false. The result: each taskdef gets its own classloader, and they're never collected.

I think there's a bug here: not only is the classloader not tied to the project object, it uses the J2EE delegation model, in which a classloader attempts to load classes from its own classpath before asking its parent for the class. However, the code makes me think that this is intentional. And I don't understand project life cycles well enough to know what would break with what I feel is the “correct” implementation.

Fortunately, there's a work-around.

As I was reading the documentation for taskdef, I saw a reference to antlibs. I remembered using antlibs several years ago, when I was building a library of a dozen or so tasks, and didn't want to copy-and-paste the taskdefs for them. And then a lightbulb lit: antlibs must be available on Ant's classpath. And that means that they don't need their own classloader.

To use an antlib, you create the file antlib.xml, and package it with the tasks themselves:

<antlib>
    <taskdef name="example1" classname="com.kdgregory.example.ant.ExampleTask"/>
    <taskdef name="example2" classname="com.kdgregory.example.ant.ExampleTask"/>
</antlib>

Then you define an “antlib” namespace in your project file, and refer to your tasks using that namespace. The namespace specifies the package where antlib.xml can be found (by convention, the top-level package of your task library).

<project default="default" 
    xmlns:ex="antlib:com.kdgregory.example.ant">

    <target name="default">
        <ex:example1 />
        <ex:example2 />
        <antcall target="example"/>
    </target>

    <target name="example">
        <ex:example1 />
        <ex:example2 />
    </target>   

</project>
It's extra effort, but the output makes the effort worthwhile:
ant-classloader-example, 528> ant -lib bin -f build2.xml 
Buildfile: /home/kgregory/tmp/ant-classloader-example/build2.xml

default:
[ex:example1] project:     org.apache.tools.ant.Project@110b053
[ex:example1] classloader: java.net.URLClassLoader@a90653
[ex:example2] project:     org.apache.tools.ant.Project@110b053
[ex:example2] classloader: java.net.URLClassLoader@a90653

example:
[ex:example1] project:     org.apache.tools.ant.Project@167d940
[ex:example1] classloader: java.net.URLClassLoader@a90653
[ex:example2] project:     org.apache.tools.ant.Project@167d940
[ex:example2] classloader: java.net.URLClassLoader@a90653

BUILD SUCCESSFUL
Total time: 0 seconds

Bottom line: if you're running out of permgen while running Ant, take a look at your use of taskdef, and see if you can replace it with an antlib. (At least one other person has run into similar problems; if you're interested in the sample code, you can find it here.)

Thursday, June 17, 2010

Prototypes

In late January, 1984, I wrote about a dozen lines of Macintosh Pascal code. It drew a series of slightly offset circles, creating what appeared to be a maze of connected pipes. That program, unchanged except for a pretentious and wholly inaccurate name given by a marketroid, went on to be the main demo program (and box artwork) for the product. And, barely a month into my first full-time programming job, I had learned the most important lesson of my career:

There's no such thing as a prototype

That utility that you wrote after having a couple of beers at lunch? It's going to be the “power user's” main tool for accessing your application. Expanded, of course, possibly beyond recognition. But deep within will be your drunken code and off-color variable names. And more important, your name will be on the check-in, so you'll always get the support calls — even for the parts that you didn't write. Worse, this code is a broken window, attracting more bad code until there's nothing left but a mess.

Most programmers learn this lesson at some point in their career. As a result, they either hide their throwaway code or make sure that someone else gets the blame for it. Newbie programmers haven't figured it out yet, and will give you a lot of attitude if you suggest taking care when writing such code. Newbie programmers that get promoted to management are the worst: they're the ones who decide that throwaway code should be part of the product.

All of which contributes to the main reason that I like Agile methodologies: they don't leave much room for second-rate code. For one thing, if you write all code test-first, even “spike solutions,” then you have at least some assurance of quality; less if you write tests just to achieve a set coverage metric, more if you use your tests as an opportunity to think about your design.

But you can write tests in any environment. Where Agile stands out is in its use of a backlog for all work, and its attitude that “tomorrow we ship.”

The former acts as a restraint on management: sure, that utility program would be useful to a large audience, but before it becomes part of the product it has to go into the backlog. And prioritized against all other feature requests. And estimated, because once it becomes a real feature it will have to do more than forble the frobulator.

The second point — that an Agile product should always be ready to ship — acts as a restraint on programmers, albeit in a counter-intuitive way. In my opinion, the root of almost all bad code is the feeling that “we gotta get it done”: there's a deadline, there's no time to do it right, so slap something together and hope it works. Agile would seem to encourage this behavior with its short cycle times, but all Agile methodologies include the fallback of “we mis-estimated, this has to be pushed to the next cycle.” If you can deliver three good features instead of four shoddy ones, that's a Good Thing, and it keeps “prototype” code at bay.

Unfortunately, there are a lot of companies that want to adopt Agile processes but can't let go of their hard deadlines and “required” feature lists. But that's a topic for another post.

Saturday, May 15, 2010

Cleverness

Delta kitchen faucets have a very clever quick-release attachment for their sprayer hose. It uses a couple of pieces of plastic and a spring clip to hold the hose onto a special fitting on the underside of the faucet, with a bushing to make a water-tight seal. To install the hose, all you have to do is slide it over the fitting until you hear it click into place. I'm sure that I appreciated that cleverness five or six years ago when I installed the faucet; it's always painful to deal with threaded fittings under the sink.

Unfortunately, I don't remember much about installing the faucet (trauma can have that effect). So when the hose started leaking a couple of weeks ago, I picked up a generic replacement hose at Home Depot, grabbed a flashlight and basin wrench, and threaded my body around the garbage disposal. And spent nearly half an hour trying to figure out how to remove the hose, with its non-standard attachment mechanism. And then spent another hour driving to Home Depot to find an adapter, only to be told that they didn't have one and I'd have to call Delta. I dreaded this call, but ended up with a very pleasant support rep, who put in an order for a free replacement.

When the new hose arrived, it only took a few minutes to install. Yet I can't help thinking that this special fitting actually cost me several hours, versus the fifteen minutes for a standard threaded fitting. Not to mention a week's worth of annoyance, wiping up leaked water every time I used the faucet.

The lesson for software development should be obvious: clever tricks can save a lot of time in the short term. But unless you remember how they work, they'll probably cost you — or someone else on your team — more in the future.

Thursday, April 15, 2010

intern() and equals()

A lot of programmers have the belief that String.intern() can be used to speed up if-else chains. So instead of writing this:

public void dispatch(String command)
{
    if (command.equals("open"))
        doOpenAction();
    else if (command.equals("close"))
        doCloseAction();
    else if (command.equals("calculate"))
        doCalculateAction();
    // and so on
}

they write this:

public void dispatch(String command)
{
    command = command.intern();
    if (command == "open")
        doOpenAction();
    else if (command == "close")
        doCloseAction();
    else if (command == "calculate")
        doCalculateAction();
    // and so on
}

Their logic seems to be that it's faster to compare references than to compare strings. And while the only way to know for sure is a benchmark, I'm willing to bet that the intern() approach is actually slower (except, of course, for carefully constructed examples). The reason being that string comparisons are quite fast, while intern() has to do a lot of work behind the scenes.

Let's start with a string comparison, since it's available in the src.zip file that comes with the JDK (in this case, JDK 1.6):

    public boolean equals(Object anObject) {
    if (this == anObject) {
        return true;
    }
    if (anObject instanceof String) {
        String anotherString = (String)anObject;
        int n = count;
        if (n == anotherString.count) {
            char v1[] = value;
            char v2[] = anotherString.value;
            int i = offset;
            int j = anotherString.offset;
            while (n-- != 0) {
                if (v1[i++] != v2[j++])
                return false;
            }
            return true;
        }
    }
    return false;
    }

Like all good equals() implementations, this method tests identity equality first. Which means that, intern() or not, there's no reason to get into the bad habit of using == to compare strings (because sooner or later you'll do that with a string that isn't interned, and have a bug to track down).

The next comparison is length: if two strings have different length, they can't be equal. And finally, iteration through the actual characters of the string. Most calls should return from the length check, and in almost all real-world cases the strings will differ in the first few characters. This isn't going to take much time, particularly if the JVM decides to inline the code.

How about intern()? The source code for the JDK is available, and if you trace through method calls, you end up at symbolTable.cpp, which defines the StringTable class. Which is, unsurprisingly, a hash table.

All hash lookups work the same way: first you compute a hashcode, then you use that hashcode to probe a table to find a list of values, then you compare the actual value against every value in that list. In the example above, computing the hashcode for any of the expected strings will take as many memory accesses as all of the comparisons in the if-else chain.

So if intern() doesn't speed up comparisons, what is it good for? The answer is that it conserves memory.

A lot of programs read strings from files and then hold those strings in memory. For example, an XML parser builds up a DOM tree in which each Element instance holds its name, namespace, and perhaps a map of attributes. There's usually a lot of duplication in this data: think of the number of times that <div> and href appear in a typical XHTML file. And since you've read these names from a file, they come into your program as separate String instances; nothing is shared. By passing them through intern(), you're able to eliminate the duplication and hold one canonical instance.

The one drawback to String.intern() is that it stores data in the permgen (at least for the Sun JVM), which is often a scarce resource (particularly in app-servers). To get the benefits of canonicalization without risking an OutOfMemoryError, you can create your own canonicalizing map.
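Here's a sketch of such a map (the class name is mine). Note that both keys and values must be weakly referenced: WeakHashMap holds its keys weakly, but a strong reference from the value would keep the key alive forever.

import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

public class StringCanonicalizer
{
    private Map<String,WeakReference<String>> _map
            = new WeakHashMap<String,WeakReference<String>>();

    public synchronized String canonicalize(String str)
    {
        WeakReference<String> ref = _map.get(str);
        String canon = (ref != null) ? ref.get() : null;
        if (canon == null)
        {
            canon = str;
            _map.put(canon, new WeakReference<String>(canon));
        }
        return canon;
    }
}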

And if you'd like to replace long if-else chains with something more efficient, think about a Map of functors.
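For instance, here's a sketch that uses Runnable as the functor interface (pre-closures Java, so anonymous inner classes), wired to the action methods from the example at the top of this post:

import java.util.HashMap;
import java.util.Map;

public class Dispatcher
{
    private Map<String,Runnable> _actions = new HashMap<String,Runnable>();

    public Dispatcher()
    {
        _actions.put("open",      new Runnable() { public void run() { doOpenAction(); } });
        _actions.put("close",     new Runnable() { public void run() { doCloseAction(); } });
        _actions.put("calculate", new Runnable() { public void run() { doCalculateAction(); } });
    }

    public void dispatch(String command)
    {
        // a single hash lookup replaces the entire if-else chain
        Runnable action = _actions.get(command);
        if (action != null)
            action.run();
    }

    private void doOpenAction()      { /* ... */ }
    private void doCloseAction()     { /* ... */ }
    private void doCalculateAction() { /* ... */ }
}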

Tuesday, April 13, 2010

Book Review: Coders at Work / Founders at Work

I just finished reading Coders at Work. I received it as an early Christmas present, and while I've read several other books in the interim, getting to the end was a struggle. This is in sharp contrast to its companion volume, Founders at Work, which I bought last summer and read over the course of a week.

Both books consist of interviews with more-or-less well-known people. In Founders, these range from Steve Wozniak, talking about the early days at Apple, to James Hong, who found himself dealing with the viral growth of Hot or Not. In Coders, the interviews range from Simon Peyton Jones (one of the creators of Haskell) to Donald Knuth (who should need no introduction). All of them have fascinating histories.

So why did I like one book and not the other? At first I thought it was because I was more familiar with the programmers' stories, particularly those who entered the field at the same time I did. In comparison, the founders' stories were new to me: the challenges of dealing with a viral website, the hunt to find funding. Those stories seemed particularly relevant to me at the time, given that I had just left my full-time job with the thought of founding a software business.

But as I plowed through Coders, I realized that the difference was in the interviewers, not the interviewed. Jessica Livingston, the author of Founders, seemed to let her interviewees go wherever they wished: Woz, for example, took three pages to describe how he got the Apple floppy drive to work. Peter Seibel, by comparison, seemed to have a set list of questions, and forced each interviewee to respond to those questions. At one point I thought of turning “literate programming” into a drinking game, but realized I would be too drunk to ever finish the book.

This approach not only made the interviews unfocused, it made them long. If you put the two books side by side, they appear to be the same size. That's misleading: Founders is 466 pages, while Coders is 617. More important, the former has 33 interviews, the latter only 15. It's easy to read 15 pages in a sitting, but you have to plan for 40+ — or put the book down halfway through and try to regain context when you pick it up.

Bottom line: if you want to learn about the history of our industry, both books are a good choice. If you want to enjoy the process of learning, stick with Founders.

Tuesday, April 6, 2010

The Trouble with Wiki

If the comments don't match the code, they're both wrong.

I've been thinking about this maxim a lot. I recently started a new job, and have been busy learning the codebase. One of the tools that I've been using is the corporate wiki, and once again I'm faced with “wiki rot”: pages that are no longer relevant, or have broken links, or were started with the best of intentions and then abandoned. I suppose I should feel lucky: I know one company that stored status reports in their wiki; the chance of a search returning anything useful at that company was pretty low.

Don't get me wrong. I think that wikis were a great invention. Wikipedia is my first stop for answers on a wide range of topics. And I learned much of what I know about patterns and Agile methodology from Ward's wiki (although by the time I started reading, much of the original community had seemingly vanished, replaced by a lot of insecure whiners).

At its best, a wiki gives you a place to write notes that can be shared within a team. It's a natural transition from the “project notebook” that Brooks recommended in The Mythical Man-Month. But as long-term project documentation? Not so good.

Projects evolve, and all too often wiki pages don't. After all, the people who spend their days working on a project know how it works. They know the environment variables required for a successful build, and the configuration needed for a successful run. And when new people join the project, it's easier (once you realize the wiki is wrong) to just ask someone. And no-one has time to update the wiki …

What's to be done? Wikipedia has one answer: a large community of people who passionately update pages. But the key word there is “large.” In a corporate environment, you simply don't have this community. And if you have a corporate practice of putting everything into the wiki, it's not going to be useful even if you do have a community of dedicated maintainers.

I think that a solution for corporate wikis starts with organization by project or product rather than organizational unit. This change is subtle, meant to tap into the developers' feelings of craftsmanship: “these pages are the public face of my work.” An empty project wiki stares team members in the face in the same way “fixme” comments do in code — admittedly, that doesn't work for all teams.

A second, perhaps more important step is to archive all pages that haven't been edited in the last year. Keep them available, but not easily available: don't show them in basic searches, and highlight any links from other pages. In the best case, this will prompt team members to keep their pages up-to-date. In the worst, it will keep casual users from wasting time with incorrect documentation — which is valuable in itself.

Friday, March 26, 2010

Distributed Version Control

I've been using Mercurial for the past couple of weeks: I'm working on the sort of small, “skunkworks” project that doesn't belong in the company's main version control system. Normally I'd create my own Subversion repository to hold it. In this case, I thought my coworkers might want to run and modify it (which they did), so distributed version control (DVCS) seemed like a good idea. I picked Mercurial primarily because Git confused me when I tried it. Mercurial has a simple interface, with many of the same commands used by Subversion, and the built-in web-server is nice for distributing changes.

Before continuing, I should say that I'm a Luddite when it comes to version control. It's not that I don't like using a VCS — indeed, I think that any team or project that doesn't use some form of version control is on a short road to unhappiness. But I'm not one to jump to the newest and shiniest VCS. I was using SCCS until long after CVS became available (working directories? we don't need no stinkin' working directories!), and it wasn't until this year that I converted the last of my personal CVS repositories to Subversion (I kept forgetting which commands went with which projects).

So I'm naturally reluctant to jump on the DVCS bandwagon. But more than that, I don't see a clear use case for a typical corporate development team.

There are definitely some situations where DVCS is appropriate: my skunkworks project is one. Linux development is another. But the reason that DVCS is appropriate in these situations is because everyone's off doing their own thing. In the case of Linux development, you have thousands (?) of individual developers making little tweaks; if those tweaks are perceived as a Good Thing, they'll eventually make their way into the (central) repository owned by Linus. On the other hand, you have dozens of distributions, all pulling from that repository and adding their own pieces. Coordination is ad hoc; development is largely (to borrow a phrase) a Team of One.

In a corporate environment, however, you don't often have people doing their own thing (unless you have a very dysfunctional team). Instead, everyone is working on (possibly overlapping) pieces of a common codebase. To me, this implies that you want one “authoritative” copy of that codebase: you don't want to guess who has the “best” copy on any given day. And once you admit the need for some level of centralization, the checkin-merge-push cycle demanded by a distributed VCS seems like a lot of effort compared to the update-checkin cycle of a centralized VCS.
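In concrete terms, and painting with a broad brush (Mercurial versus Subversion commands, respectively):

# distributed: commit locally, then reconcile with the central copy
hg commit -m "my change"
hg pull
hg merge
hg commit -m "merged upstream changes"
hg push

# centralized: reconcile, then commit
svn update
svn commit -m "my change"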

“But I can work on a plane!” I've actually heard that argument. At the time, my response was “I didn't think you traveled that often.” But a better response is “why do we care about your commits?” And that's the main point of this post.

“Check-in early, check-in often” is a great motto to live by. Especially early in a project, when you're experimenting. It's nice to be able to roll back to what you were doing an hour ago, or roll forward some pieces you thought you didn't need but now realize you do. If you use continuous integration, you'll check in at least daily.

However, frequent checkins create a quandary: once your time horizon moves past a few hours, you don't want all those commit messages in your repository! Think about the last time that you looked at a repository log: why did you do it? Probably to find when and why a particular file changed; maybe several changes, widely spaced in time. I've done that a lot, and stumbling over dozens of minor checkins (even after using annotate) is a royal pain.

One way to work around this issue (when it's even considered an issue) is to use “feature branches”: all new development takes place on a branch, and then gets merged back to the trunk when completed. This is the way that I learned how to do multi-person source management some 15 years ago, using ClearCase. Unfortunately, many teams are scared by feature branches, or perhaps they're scared of the merge at the end, so they stick with the “CVS” approach of doing all development on the trunk, with branches confined to bugfix duty. And at the other extreme, standard DVCS procedure is that the repository is the branch — in other words, once you've pushed, your trunk has a bunch of undifferentiated revisions.

My alternative is something I call “ghetto DVCS”: cloning a central repository, and working in the clone. When you're done, you copy your changes into a working directory, and check them into the “real” repository. Ironically, I started doing this while I was traveling, and wanted to use source control without access to the central repository. I decided that I liked it enough to continue even when I wasn't traveling. For example, Practical XML revision 87 represents close to two weeks worth of work, during which I added, deleted, or renamed files many times. If I had been working directly in the central repository, there would be a dozen or more revisions.

The problem with my approach is that it can be a lot of work, particularly if you rename existing files. And this is where I think DVCS offers promise: since they track the ultimate parent revision for each file in your local repository, they could easily merge the tip with this parent and ignore the interim revisions (and maybe Git can do this; as I said, it confused me, but tip merges make a lot of sense for the Linux development model).

Until that happens, I'm sticking with svn.


Update: my friend Jason pointed out that git pull has a --squash option. Which makes a lot of sense, given the pull-based nature of the Linux development model. It's not the "push tip" that I want, but is closer. Now I just have to figure out how to use the rest of git.
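That is, something along these lines (untested; the remote and branch names are illustrative):

git pull --squash origin master
git commit -m "their changes, collapsed to a single revision"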

Monday, February 22, 2010

Problems and Opportunities

There's no such thing as problems, only opportunities

There may be people and situations for which this is true: tow-truck operators and plumbers come to mind. But for the rest of us, it's critically important to tell the difference — in fact, that is the difference: criticality. Opportunities are optional, problems aren't.

I'm not sure where the “no problems” slogan came from, but I suspect a half-asleep MBA student in ECON 101. In economics, there truly are no problems, nor opportunities either, only choices. But every choice has an “opportunity cost,” which, confusingly, is the net profit from the best alternative. If you (like many business students) don't think of the possibility that net profit can be negative, it's an easy step from cost to slogan.

Houston, we have an opportunity

In the real world, however, net profit can be negative. And when you're a quarter million miles from earth, with limited oxygen and electricity, the opportunity cost of the wrong choice is death. In the computer industry, we're not likely to face such drastic costs; but that doesn't mean we don't have problems.

Wednesday, February 17, 2010

JavaFx: What Niche does it Fill?

I've had fun playing around with JavaFx. I can't shake the feeling that it's in the same situation as Java was in 1996: targeted to a specific niche, but much more useful as a general-purpose language. Unfortunately, everything I've seen says that Sun (Oracle) intends to keep JavaFx in the RIA niche, a competitor to Flash and Silverlight.

And if that's their plan, here's my response: ain't gonna happen. Flash got to its position of ubiquity not so much because of technical merit, but because Adobe has mindshare among the graphical designers who are — like it or not — the arbiters of what goes on the web. At best, these people think of Java as “something to do with JSPs”; at worst, they think of applets (or more likely, negative folklore about applets, because they've never actually seen one). JavaFx is as likely to replace Flash as The Gimp is to replace Photoshop.

More important, Flash itself may not have much longer to run: there seems to be a strong backlash against plugins. Apple, with the iPhone and iPad, is the leading edge of this movement. However, the same attitude is showing up in desktop browsers, as you can see from the many pages describing how to make Java work with Chrome (I spent a half hour trying to get the latest Sun JRE to work with Chrome on Linux, but eventually gave up; call this statement sour grapes if you wish).

Instead, I think RIAs will be built with HTML (5) and JavaScript. This is, again, something very familiar to web designers. Most seem to go the route of raw HTML plus a library such as JQuery, but libraries like the Google Web Toolkit and Yahoo UI will probably be more useful to the application (versus web) developer.

But, what about the mobile market? After all, the JavaFx tagline is “all the screens of your life,” and they've certainly made an effort to create one platform to span desktop and mobile applications. However, as you dig a bit deeper, you find that mobile device support is not quite ready for prime time: “Sun is working with Mobile Device Manufacturers and Mobile Operators to enable out of the box support for JavaFX.” What of the billions of Java-enabled handsets already out there? It appears that they're out of luck (although I haven't actually tried running a JavaFx app on, say, a Motorola RAZR). And, just in case you missed it, JavaFx will never find its way to the millions of iPhone users.

To wrap up: I believe JavaFx has a bleak future if it's confined to the RIA and/or mobile markets. But those aren't the only markets out there, and there's no reason that JavaFx has to remain confined. As I wrote when I started this blog, I like Java because it has a simple yet very powerful mental model, and the JavaFx language seems to have the same characteristics. It encourages a functional style of programming while still being object-oriented. There are, of course, other languages that do the same thing; Scala, for example. But those other languages don't have the Oracle marketing department behind them.

There's a lot of discussion about how Java is to evolve. But language evolution is a tricky thing. I could easily do without the mess that is auto-boxing; other people feel the same way about generics. And C has remained essentially unevolved since the 1970s, yet is still very useful (albeit in limited niches).

So I'll end with a question: will Java continue to acquire cruft, or is JavaFx the future?

Tuesday, February 16, 2010

JavaFx: Threading

If you're thinking of writing server-style applications with JavaFx, threading appears to be a problem. The first hurdle is that JavaFx programs run on the AWT event dispatch thread — which makes sense for a GUI. Unfortunately, server applications often run in a “headless” environment, and there are some AWT classes that shouldn't require a graphical display yet still like to throw if they don't have one. Which means that, if some of those classes are buried in the JavaFx runtime, you won't be able to run server applications period.

Assuming that you get past the headless issue, it shouldn't be too difficult to create a background thread (either by creating a thread directly, or via javafx.async.JavaTaskBase) and start a Java server application such as Tomcat in that thread (I say “shouldn't” because I haven't tried this). That Java application could then invoke non-GUI JavaFx classes, perhaps one that implements javax.servlet.Servlet, as I described in the previous post.

However, I think there are dragons lurking here. The only reason that you'd want to implement a servlet via JavaFx is to exploit language features such as data binding. Looking at the generated bytecode, however, I haven't found any synchronization and I've seen a lot of static members. In a multi-threaded environment like an app-server, this seems to be an invitation for data corruption.

I'm also somewhat disappointed at the threading support for writing GUI apps. As I've written elsewhere, long-running operations should execute outside of the event dispatch thread. The javafx.async package provides two ways to run such code: the first is JavaTaskBase, which as its name indicates, is meant to run code written in Java (and said code is not allowed to invoke JavaFx classes). The second approach is via Task, which is not terribly well documented (“does not define how to specify the code to run”), and appears to be meant for internal use by API classes such as javafx.io.http.HttpRequest.

Speaking of javafx.io.http.HttpRequest (which is not found in the provided src.zip): I ran the example provided in its class documentation, with the addition of printing the current thread for each of the callbacks it defines. And discovered that the callbacks were not executing on the event dispatch thread, so shouldn't be allowed to update GUI components. As I said, there are some dragons here.

On the positive side, when I look at the provided source code, I note that Brian Goetz (author of Java Concurrency In Practice) is listed in the @author tag for many classes. I can only hope that this association is ongoing, and that he gives some thought to JavaFx concurrency in practice.

Monday, February 15, 2010

JavaFx: Integrating with Java

JavaFx applications run on the JVM, which means that they must use class file formats that are compatible with those produced by the Java compiler. Which in turn means that JavaFx code should be able to call Java code and vice-versa. And, in fact, JavaFx to Java integration is easy:

import java.lang.Math;

var a = 12;
var b = 13;
var c = Math.min(a, b);
println(c);

var d = "this is a test";
var e = d.replaceAll("a test", "an example");
println(e);

Because of type inference, variables a and b are known to be integers, which are represented internally as int (not, as you may expect, as Integer — although that's no doubt an implementation detail and subject to change). This means that they can be passed directly to the Java method Math.min(). Similarly, variable d is known to be a java.lang.String, so you can invoke instance methods defined by that class.

Calling from Java into JavaFx is a little more difficult. Executing a script from a Java method is not practical: there are a number of methods that you must invoke, and their names and call order are undocumented. Script functions are compiled as public static methods, so you could theoretically import the script class and call them directly. However, any non-trivial script function would rely on script variables, which brings us back to the issue of setting up context.

JavaFx classes, however, are compiled as public static nested classes of their defining script. Which means you can instantiate a JavaFx class from within Java code. However, all of the instance variables will be set to default values — and the setter method names are, of course, implementation details and subject to change. Worse, NetBeans doesn't recognize that the compiled class exists, and flags your Java code as an error; you have to compile manually (I didn't try Eclipse).

That leaves instantiating a JavaFx class from within JavaFx code, and passing it to a Java class for use. This is surprisingly easy, although convoluted. You first create a Java interface:

public interface CallbackIntf
{
    public void callback(String param);
}

A JavaFx class can then implement that interface:

class MyCallee
extends CallbackIntf
{
    override function callback(param : String)
    {
        println("callback invoked: {param}");
    }
}

Next, you instantiate your Java object from within the JavaFx script (in this example, Callback2), and then you can call a method on that object. Incidentally, note that we use new to instantiate the Java object, and can pass arguments into its constructor.

var caller = new Callback2("test");
var callee = new MyCallee;
caller.makeCallback(callee as CallbackIntf);
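
The Java side of this example is unremarkable; Callback2 looks something like this:

public class Callback2
{
    private String _param;

    public Callback2(String param)
    {
        _param = param;
    }

    public void makeCallback(CallbackIntf callee)
    {
        // the Java code neither knows nor cares that callee is
        // implemented in JavaFx
        callee.callback(_param);
    }
}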

Convoluted, yes, but it offers promise: many framework objects (eg, Servlet) are specified as interfaces. If you're writing apps that could benefit from JavaFx language features (such as data binding), it's theoretically possible to do so and deploy into a Java framework. In practice, however, I don't think it will work. Threading is one reason, and that's the subject of my next post.

Friday, February 12, 2010

JavaFx: Building a GUI

JavaFx is targeted toward building graphical applications, and you'll find plenty of tutorials that walk you through such apps. Given this focus, I felt I should at least touch on GUI features.

Building the Containment Hierarchy

In a traditional Swing application, you start with a top-level window (a JFrame, or JApplet, or JDialog), and programmatically build out a “containment hierarchy” of components. For example, you'd build your main menu by instantiating a JMenuBar, then instantiating JMenu instances and adding them to it. For each JMenu instance, you'd then instantiate and add JMenuItem instances for the various menu choices. All Swing programs have a similar structure, which means there's a lot of boilerplate code. You could theoretically define the entire containment hierarchy as, say, an XML file, and eliminate the boilerplate. However, in practice you need to link together components and apply rules: for example, a menu item that should only be enabled when there's data for it to process.
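
For anyone who hasn't written one lately, here's the Swing boilerplate for a menu bar holding a single menu item (imports from javax.swing and java.awt.event elided):

JFrame frame = new JFrame("Example");
JMenuBar menuBar = new JMenuBar();
JMenu fileMenu = new JMenu("File");
JMenuItem openItem = new JMenuItem("Open...");
openItem.addActionListener(new ActionListener()
{
    public void actionPerformed(ActionEvent e)
    {
        // respond to the menu choice
    }
});
fileMenu.add(openItem);
menuBar.add(fileMenu);
frame.setJMenuBar(menuBar);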

A JavaFx application, however, is a step closer to declarative GUI design: the containment hierarchy (JavaFx calls it a “scene graph”) can be specified as a nested object initializer:

Stage {
    title: "Hello",
    width: 250,
    height: 80,
    scene: Scene {
        content: [
            Text {
                font : Font { size : 16 },
                x: 10,
                y: 30,
                content: "Hello, World"
            }
        ]
    }
}

The top level of the scene graph is the Stage. Unlike Swing, which uses different top-level classes for stand-alone applications versus applets, JavaFx uses Stage everywhere. As with other JavaFx GUI classes, Stage presents a much simpler interface than its Swing equivalents: a single scene for content, and a couple of callbacks for high-level events such as closing the window.

There are a couple of things that may not be obvious from this example. First: because the scene graph is constructed using an object initializer, you can extract parts of the initialization into variables — even bound variables:

var curText = "Hello, World";

Stage {
    // ...
                content : bind curText,
    // ...

The other important thing is this: creating a Stage creates and displays the window representing that stage. Swing, by comparison, separates construction and display. In the example above, we created the Stage without assigning it to a variable; for simple applications, that's all you'll normally do. For more complex applications, you may construct multiple Stages, or hold onto them via variables so that you can modify them while the program is running.

Layout

Layout is one of the biggest pains of Swing programming. The Swing layout managers are all based on the idea that you need to position components mathematically, in order to deal with differing display resolutions and/or user actions (such as changing the size of a window). The result, more often than not, is an ugly UI. Or, alternatively, a custom layout manager that better suits the goals of a UI designer (I have several almost-complete articles on Swing layout, including a how-to on writing a layout manager; keep watching my main website feed to see when they appear).

JavaFx provides only a few, very basic layout managers. Most of the examples and tutorials use explicit layout: you provide the X and Y position of the object and, if needed, its width and height. There's also an option to extract layout information into an external CSS file. I haven't tried this, and am not certain why you would; perhaps to deal with displays that have differing resolutions?

Interaction

JavaFx responds to user actions just like any other GUI environment: it invokes callbacks. If you've spent much time writing Swing ActionListeners, you'll probably like the fact that JavaFx supports first-class functions that you can specify as part of declarative initialization. If you've spent much time with JavaScript, you'll probably hate the fact that those functions are strongly typed, meaning that you have to specify parameters that you may never use.

JavaFx has a much simpler set of actions than Swing, and you can only attach a single function to a particular action. But be honest: how often did you actually use a list of listeners in Swing? Or intercept Document events rather than just calling getText() on a field?

As I mentioned in my previous post, I think that bound fields will eliminate a lot of the code inside an action. As an example, consider a name/address form, along with a button that writes the form's contents to a database. In a traditional Swing app, that button's action handler would have to explicitly retrieve the contents of every field on the form, use those values to populate an object, then pass that object on to the database. With JavaFx, you could bind the fields of the object to the fields on the form, and the action would simply submit the already-populated object (although if the object's fields remained bound, the user could continue to update them — perhaps JavaFx could benefit from a value-based copy constructor).

CustomNode

A scene graph is a tree of javafx.scene.Node objects, with the Scene at the top, and various combinations of grouping and rendered components under it. The CustomNode class allows you to create new combinations of components that occupy a place in that hierarchy. For example, a field object that consists of both a label and a text field, with consistent sizing.

Note that I said “create”: one of the anti-patterns of Swing programming is to subclass JPanel just to add components to it (that anti-pattern is the topic of another forthcoming article on my website). When you subclass CustomNode, you are actually creating a controller class. You must override the abstract function create() to build out the node's containment hierarchy.

Graphics and Animation

JavaFx provides a simplified interface to the Java2D library. In Swing, if you want to access the drawing primitives, you must subclass a Swing component (usually JPanel) and override its paintComponent() method. JavaFx gives you components that already do this; you simply create a node in the scene graph. You specify the object's position, size, stroke, fill, and transforms in its initializer.

Graphical objects and data binding come together in JavaFx animation. Here's a simple countdown timer, with explanation to follow:

var counter = 10;

Stage {
    title: "Countdown",
    width: 100,
    height: 40,
    scene: Scene {
        content: [
            Text {
                font : Font { size : 18 },
                textOrigin: TextOrigin.TOP,
                x: 60, y : 10,
                content: bind counter.toString();
            }
        ]
    }
}

Timeline {
    keyFrames: [
        at (10s) {
            counter => 0 tween Interpolator.LINEAR;
        }
    ]
}.play();

The scene graph is simple: a single text field. Its content is bound to the variable counter; every time that variable gets updated, the text field will be updated as well. Where this program gets interesting is the Timeline object, particularly its keyFrames property.

A Timeline is like a javax.swing.Timer on steroids. You tell it the desired value for one or more variables at fixed points in time, along with a way to interpolate between the starting value and the desired value, and the Timeline makes that happen. Here I specify that, 10 seconds after the timeline starts, the value of counter should be zero, and that it should change linearly. The counter starts at 10; the Timeline repeatedly invokes the interpolator to compute the “current” value and sets the specified variable, which is in turn bound by the text field.
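
For contrast, here's roughly the same countdown written against javax.swing.Timer: every tick, the bookkeeping is yours (this is a sketch; the label would normally live inside a containment hierarchy that I'm not showing):

final javax.swing.JLabel label = new javax.swing.JLabel("10");
final javax.swing.Timer timer = new javax.swing.Timer(1000, null);
timer.addActionListener(new java.awt.event.ActionListener()
{
    private int counter = 10;

    public void actionPerformed(java.awt.event.ActionEvent e)
    {
        counter--;
        label.setText(String.valueOf(counter));  // manual "binding"
        if (counter == 0)
            timer.stop();
    }
});
timer.start();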

Animation is considered such a big deal that the at operator is a part of the language. As far as I can tell, all that it does is instantiate a KeyFrame object. Unfortunately, the Animation chapter of the Language Reference is currently empty. As a result, figuring out animations is a matter of looking at examples and experimenting.

Profiles

One of the problems with J2ME development is that you don't have the full Java environment: there's no floating point, and a very limited subset of the API. JavaFx attempts to rectify this, but even so, its API breaks out into two categories: common and desktop. The desktop profile provides OpenGL graphical effects (the entire javafx.scene.effect package), as well as the ability to use Swing components.

This Isn't Your Father's GUI

Looking through the API docs, I was struck by what's missing: there aren't any menus. Nor tables. This is intentional: as I learned from a HanselMinutes podcast, the JavaFx team is not trying to build the next toolkit for corporate applications. Indeed, looking at what they've done, I hear “mobile app”: a simplified UI, with flashy controls, designed to fit in a small space. Which is not a bad thing.

Wednesday, February 10, 2010

JavaFx: Bound Variables

One of the things that I find most interesting about the JavaFx language is binding: you bind a variable to an expression, and the value of the variable changes whenever the expression changes:

var a = 12;
var b = bind 2 * a;

println(b);
a = 17;
println(b);

In this example, the value of b starts out as 24 — 12 * 2. However, when a is updated, the value of b changes: it's now 34.

Binding is implemented using an Observer pattern, rather than lazy evaluation: b is recalculated at the time a is changed (the compiler attaches a “listener list” to all variables involved in a bound expression). Functional purists may be muttering under their breath at this point, but this approach does allow a couple of interesting features.
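
You can picture the mechanism as something like the following crude Java sketch; this is my speculation, and certainly simpler than what the compiler actually emits:

import java.util.ArrayList;
import java.util.List;

public class ObservableInt
{
    public interface Listener
    {
        void valueChanged(int newValue);
    }

    private int _value;
    private List<Listener> _listeners = new ArrayList<Listener>();

    public int get()
    {
        return _value;
    }

    public void addListener(Listener listener)
    {
        _listeners.add(listener);
    }

    public void set(int value)
    {
        _value = value;
        // eager recalculation: every dependent bound expression is
        // recomputed now, not when it's next read
        for (Listener listener : _listeners)
            listener.valueChanged(value);
    }
}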

First, it means that the runtime does not need to completely re-evaluate a bound expression: the compiler memoizes sub-expressions, and only recomputes those that depend on the changed value. This can be useful if the bound expression includes a long-running function:

function longRunningFunction() {
    println("longRunningFunction() invoked");
    2;
}

var a = 12;
var b = bind a * longRunningFunction();
println(b);

a = 13;
println(b);

Run that script and you'll see the “invoked” message printed only once: when a changes, the product is recomputed, but the memoized result of longRunningFunction() is reused.

Second, it means that you can create bidirectional binds, in which changing the value of either variable will change the other. This can be useful if you have two linked input fields:

var x = "foo";
var y = bind x with inverse;

println("before change, x = {x}, y = {y}");
y = "bar";
println("after change, x = {x}, y = {y}");

Binding isn't limited to discrete variables. You can pass a bound expression into a class initializer, and that particular instance of the class will be updated as the expression changes:

class MyClass {
    var x : Integer;
}

var a = 12;
var b = MyClass { x : bind a };
var c = MyClass { x : a };

println("before change: b={b.x}, c={c.x}");
a = 13;
println("after change: b={b.x}, c={c.x}");

When the class is a GUI object, binding becomes a low-effort way to implement MVC: simply bind the relevant fields of the GUI object (view) to the relevant fields of the model object. Having written more than my share of event listeners and active models, this approach seems very civilized.

Binding does, however, come with a performance hit: I wouldn't want to bind to a variable that will be updated frequently. And I can imagine a program filled with “spaghetti binds” that are impossible to untangle. It's like any new feature: “with great power comes great responsibility.” I'm sure we'll go through a phase of gratuitous binding, but once programmers get that out of their system we'll have programs that are easier to maintain.

Tuesday, February 9, 2010

JavaFx: Interesting Language Features

Today's post takes a look at a few of the JavaFx language features that I find interesting. It's by no means an exhaustive rundown of the language; for that, take a look at the language reference (I expect this link to change at some point in the future — right now it points to a build system). And the most interesting feature, binding, will be the topic of its own post.

This is going to be a long post, so get a cup of coffee.

Scripts

A JavaFx source file is called a “script,” and looks a lot like a JavaScript source file: unstructured, with variable definitions, functions, and inline executable code all jumbled together. But, as I noted earlier, this “script” is actually compiled into a Java class.

This class definition has a lot of boilerplate in it to bridge the gap between features of JavaFx and JVM bytecode. All of this boilerplate is, of course, implementation dependent and subject to change. One piece is the static method javafx$run$(), which serves the same function as Java's main(); the javafx program looks for this method when it starts a JavaFx application.

Another implementation detail, and one that I find a little disturbing, is that all of the script's variables and functions are represented as static class members. I suspect that this is driven by the idea that scripts are primarily used for initialization of the GUI, and most action takes place in classes.

Class Members and Initialization

Unlike Java, but like JavaScript, JavaFx classes are happy to expose their internals. The language defines access modifiers, and the language reference indicates that the default is script-level access; the generated code, however, makes everything public.

Rather than providing constructors, you initialize instances using an “object literal” notation that's very similar to JavaScript's object literals:

class MyClass {
    var a : Integer;
    var b : String;
}

var z1 = MyClass { a: 123, b: "456" };

There's no new! And I should mention that JavaFx is very liberal with regards to delimiters in an object initializer: here I separated my initializers with a comma, which follows JavaScript practice. However, I could have used a semi-colon, or simply omitted the delimiter altogether; the JavaFx compiler would recognize where each item ended. I still haven't figured out all of the rules, so I simply follow Java practice: semi-colons at the ends of statements, and commas between items in a list.

Mixin Inheritance

The Java inheritance rules are imposed by the JVM: a given class may have only one superclass, but may implement any number of interfaces. JavaFx appears to bend the rules: you can create “mixin” classes, similar to Scala traits, and a JavaFx class can extend more than one mixin:

mixin class Foo {
    var x : Integer;
}

mixin class Bar {
    var y : String;
}

class Baz
extends Foo, Bar {
    override function toString() : String {
        "[{x},{y}]";
    }
}

var a = Baz { x : 123, y : "456" };
println(a);

Of course, JavaFx doesn't break the rules: the JavaFx compiler translates mixins into composition. The Baz class actually holds instances of Foo and Bar, and has getter and setter methods that delegate to those instances. Incidentally, you won't find mixins in the current language reference; you need to go to the tutorials.

This example shows one additional feature of inheritance: you have to explicitly override member variables and methods. Here we implement a custom toString() method. Since that method is already defined by java.lang.Object, which is the ultimate ancestor of any JavaFx class, we need to mark it as explicitly overridden. As far as I can tell, this is simply a means of error-checking, similar to Java's @Override annotation; JavaFx does not impose “final unless defined otherwise” semantics.

Variables, Definitions, and Type Inferencing

In the class examples above, you saw the canonical variable declarations:

var NAME : TYPE;

At first glance, this appears to be a case of the language designers wanting to look like ActionScript and not Java; it's semantically equivalent to a Java variable declaration, which puts the type first. However, JavaFx also supports type inferencing: you can declare and initialize a variable at the same time, and the compiler will figure out what type you're using:

var a = "foo";
var b = a;

For people tired of constantly repeating long variable declarations in Java, type inferencing is a godsend. People used to JavaScript, which doesn't have strong typing, may find it annoying:

var a = "foo";
a = 123;        // this won't compile

To close out the discussion of variables, I'll note that JavaFx also provides the def keyword, which is similar to Java's final modifier:

def x = 123;
x = 456;        // won't compile

I'm dubious regarding the value of def. With multi-threaded programming, it's very useful to declare that a “variable” is actually immutable; Scala, for example, provides the val keyword to do this. However, the JavaFx def keyword does not imply immutability: it can reference a bound expression, which may change at runtime. Until the language matures, and develops truly different semantics for def versus var (and I'm not sure what those might be), I plan to stick with var for all variables.

Blocks Have a Value

Most “curly brace” languages use those braces to make a group of statements syntactically equivalent to a single statement. For example, in Java:

for (int ii = 0 ; ii < 10 ; ii++)
    System.out.println("this is in the loop");
System.out.println("this isn't");


for (int ii = 0 ; ii < 10 ; ii++) {
    int zz = 12 * ii;
    System.out.println("this is in the loop");
    System.out.println("as is this");
}
System.out.println("this isn't");

Blocks also introduce a new scope for variables: in the example above, the variable zz is only accessible from code within the block. In JavaFx, blocks have an additional feature: the last expression in the block becomes the value of that block:

var z = {
    var x = 10;
    var y = 20;
    x + y;
};

Interesting, but what's the point? Well, consider that in JavaFx the if keyword introduces an expression, not a statement. So you get the following formulation of a ternary expression:

var x = 12;
var y = if (x < 20) {
            var z = x / 2;
            z - 17;
        }
        else {
            23;
        }

And not only is if an expression, but for is as well. Which is particularly useful when generating sequences (the JavaFx equivalent of an array):

var seq = for (x in [1..5]) {
              [x, x*x, x*x*x];
          }

Explaining that last example would be a blog posting on its own. I suggest, however, that you try it out if you've downloaded a development environment.

Closures

Closures are a hot topic in modern programming languages. As C programmers have known for decades, function pointers are an incredibly powerful tool (Lisp programmers have known this for longer, but such knowledge is indistinguishable from their general smugness). A big benefit of function pointers is that they let you abstract away a lot of boilerplate code, particularly in the area of resource management. For example, here's a typical piece of Java database code:

Connection cxt = DriverManager.getConnection("url");
PreparedStatement stmt = null;
try
{
    stmt = cxt.prepareStatement("sql");
    ResultSet rslt = stmt.executeQuery();
    // process result set
}
catch (SQLException ex)
{
    // do something with the exception
}
finally
{
    if (stmt != null)
        stmt.close();
    if (cxt != null)
        cxt.close();
}

That boilerplate try/catch/finally must be repeated for every single SQL statement that you execute. And as written, it has a bug: the call to stmt.close() can throw, in which case the Connection will never be closed. If you cache PreparedStatements, the code becomes even more complex. With first-class functions, you can implement the boilerplate in a DatabaseManager class, and define a function that just processes the result set:

var dbManager = DatabaseManager { /* config */ };
dbManager.executeQuery(
            "sql",
            function(rslt:ResultSet) {
                // process results here
            });

In this example, we create a new function object as an argument to the executeQuery() method. The method can then call the function, passing in the ResultSet that it gets after executing the query. Once the method finishes, the function becomes eligible for garbage collection.
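
In Java, before closures, you can get most of the way there with a callback interface, at the cost of anonymous-class noise. Here's a sketch of what that hypothetical DatabaseManager might look like (the names and shape are mine; note that the nested finally blocks also fix the close() bug from the earlier example):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DatabaseManager
{
    public interface ResultSetHandler
    {
        void process(ResultSet rslt) throws SQLException;
    }

    private String _url;

    public DatabaseManager(String url)
    {
        _url = url;
    }

    public void executeQuery(String sql, ResultSetHandler handler)
    throws SQLException
    {
        Connection cxt = DriverManager.getConnection(_url);
        try
        {
            PreparedStatement stmt = cxt.prepareStatement(sql);
            try
            {
                handler.process(stmt.executeQuery());
            }
            finally
            {
                stmt.close();
            }
        }
        finally
        {
            // runs even if stmt.close() throws, unlike the earlier example
            cxt.close();
        }
    }
}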

Those examples are a bit abstract, so here's a simple compilable script:

var fn = function(x, y) {
    x + y;
}

var z = fn(12, 13);
println(z);

Here we define a variable, fn, that holds a function. We then execute the function, storing the result in variable z. We could have passed the function call directly into println(), and we could pass the function itself to some other function.

But wait, there's more! JavaFx implements true closures, where a function has access to the variables defined in its enclosing scope:

var a = 17;
var b = 23;
var c = function()
        {
            var t = a;
            a = b;
            b = t;
        };

println("before swap: a={a}, b={b}");
c();
println("after swap: a={a}, b={b}");

Personally, I don't particularly like closures. You could invoke c from any point in the code, perhaps in a completely different script; this is a great way to make your program behave in an indeterminate and hard-to-debug manner. In cases where you actually want the ability to update script variables from somewhere else, binding is probably a better option; I'll write about that tomorrow.

To close out this post, I want to note that JavaFx does not support lazy evaluation of functions. To my mind (if no-one else's), that's a big part of what makes a “functional” programming language: you can pass a first-class function to another function that expects, say, an integer, and the passed function will be evaluated when needed to provide the value.

Monday, February 8, 2010

Getting Started with JavaFx

As my last posting noted, I'm intrigued by the JavaFx language. It's being marketed as a platform for Rich Internet Applications (RIAs), in competition with Flash (Flex) and Silverlight. I don't think it's going to be successful in that space, for reasons that I'll detail in a future post. However, it might just be the next big language for the JVM platform, a functional-object language with the marketing muscle of Oracle behind it.

With that in mind, I'm taking some time to investigate the language, and will be writing a series of posts about it. I won't spend a lot of time on the GUI aspects; there are already blogs that do that, and a whole page of examples with source code. Instead, I want to point out some of the things that I think are neat about the language, and some of the roadblocks that might be in its way as a general-purpose environment.

There will be a few code snippets along the way. You can compile and run them from the command line, using the javafxc and javafx tools from the SDK. There's also IDE support for NetBeans 6.x and Eclipse. I use the NetBeans plugin, and it seems fairly solid (although there seem to be some issues with NetBeans on Ubuntu 9.04 — sometimes it stops accepting keystrokes until I switch windows). If you don't already have NetBeans, you can download it pre-configured with JavaFx, from the JavaFx downloads page (and you'll find the SDK there too).

Before diving into code, there are a couple of terms that have special meaning with JavaFx:

script
A JavaFx source file. Although called a “script,” these files must be compiled into Java bytecode, and run from within the JavaFx environment. Scripts are translated into top-level Java classes with predefined methods used to invoke the script (think of main()), along with the various methods, variables, and code defined by the script.
class
Much like a class in any other language: a definition that combines data and methods, and must be instantiated for use. JavaFx classes are defined within a script, and the compiler creates nested Java classes to represent them.

Now it's time for “Hello, World”:

package experiments.language;

println("Hello, World");

That's it. It really does look like a script: there's no boilerplate class definition, no public static void main definition. Even the println() statement gives no hint that it's actually implemented by the class javafx.lang.Builtins.

All of this, however, is simply syntactic sugar: you could easily write a precompiler that takes a similar script and adds the boilerplate Java code. I'll leave you with something new, however: inline variable substitution, a la PHP:

package experiments.language;

def addressee = "World";
println("Hello, {addressee}");

Thursday, February 4, 2010

Book Review: Essential JavaFx

I recently received a copy of Essential JavaFx as book give-away at my local Java Users Group meeting. The JUG receives such books directly from publishers, with the expectation that JUG members will write reviews. My review of this book follows; I suspect that Dave won't be sending it back to the publisher.


My bookshelf holds a dog-eared copy of The UNIX C Shell Field Guide, written by Gail and Paul Anderson in 1986. To me, it represents the perfect mix of tutorial and reference, presenting an enormous amount of technical detail in a careful progression. Unfortunately, I can't say the same for their book Essential JavaFx: the material is disjointed, skimming the surface of many topics in seemingly random order, without truly examining any of them.

This book begins, as nearly all programming books do, with a simple “Hello, World” example. This part is, in fact, a little too simplistic: it walks the reader through specific screens and buttons of the NetBeans IDE. The actual program appears only as a screen-shot, without explanation of its structure. For a book that is targeted to an audience with “some previous programming experience,” this is disappointing: I want to see the program, not the IDE.

Chapter 2 attempts to remedy that initial surface treatment, by presenting a relatively complex example application. And it starts well, with a description of the way that a JavaFx application is built around a “scene graph,” and the admonishment to “think like a designer [...] visualize the structure of your application.” But then it dives into variable definitions and control constructs, without so much as mentioning how they fit into the scene graph. And from there, the same chapter touches on accessing Java classes, visual effects, lazy expression evaluation (not identified as such), and animations — a few paragraphs or pages for each, without explanation as to how they might fit together in an application.

Chapter 3 is a whirlwind treatment of the core JavaFx language. And there is a lot to this language: it is far closer to Scala than to Java, with influences from Python. Indeed, I could envision an entire book exploring the effective use of these core language features, perhaps even ignoring the GUI.

The subsequent chapters are much the same: quick hits without real content. Chapter 6, “Anatomy of a JavaFx Application,” is the closest that we get to a step-by-step tutorial, yet even here the Andersons forget their early advice: rather than starting with the application structure, they immediately create component subclasses, and only later assemble them into a scene graph.

In the end, I put the book down and turned to my IDE and “experiential learning.”


So, not so great a review of the book. But the language looks fascinating, and I plan to spend time working with it over the next few weeks. My first impression is that it might in fact be “the next big language” for the JVM: it seems to be a nice combination of strong typing, declarative configuration, and functional/object execution. I'm wondering if, like Java before it, JavaFx might be introduced to the world as a GUI language, yet end up as a language for server-centric applications.

Friday, January 29, 2010

Micro-Optimization is Easy and Feels Good

Have you ever seen code like this?

private String foo(String arg1, int arg2)
{
    StringBuilder buf = new StringBuilder();
    buf.append(arg1).append(" = ").append(arg2);
    return buf.toString();
}

If you ask the person who wrote this why s/he used a StringBuilder rather than simply concatenating the strings, you'll probably hear a long lecture about how it's more efficient. If you're a bytecode geek, you can respond by demonstrating that the compiler generates exactly the same code. But then the argument changes: it's more efficient in some cases, and never less efficient, so it's still a Good Thing. At that point it's usually easiest to (snarkily) say “well, you should at least pre-allocate a reasonable buffer size,” and walk away.
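
For reference, here's the version that compiles to identical bytecode; javac rewrites the concatenation into exactly those StringBuilder calls:

private String foo(String arg1, int arg2)
{
    // javac generates new StringBuilder().append(arg1).append(" = ")
    // .append(arg2).toString() for this expression
    return arg1 + " = " + arg2;
}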

These arguments are not caused by the time quantum fallacy. The StringBuilder fan recognizes that different operations take differing amounts of time, and is actively trying to minimize the overall time of his/her code. Never mind that, in context, string concatenation is rarely a significant time sink. The rationale is always “it can't hurt, and might help.”

This is the same attitude that drives curbside recycling. It feels good to put an aluminum can in the bin: the can can be melted down and reused, saving much of the energy needed to smelt the aluminum from ore. Doing so, however, avoids the bigger questions of whether aluminum is the most environmentally sound packaging material in the first place, or whether you should even be drinking the contents of that can.

In the same way, micro-optimizations let you feel good while avoiding the bigger question of what parts of your code should be optimized. Experienced programmers know that not all code should be optimized: there's an 80-20 (or more likely 90-10) rule at work. But finding the 10% that should be optimized is, quite frankly, hard.

It's not that we don't have tools: profilers of one form or another have been around since before I started working with computers. Java has had a built-in profiler since at least JDK 1.2, and a good profiler since (I think) 1.4. For that matter, some carefully placed logging statements can give you a good idea of what parts of your code take the longest (just be sure to use a logging guard so that you're not wasting CPU cycles!).
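
For anyone who hasn't seen the idiom, here's a guard in action, using a Log4J-style logger (the class is a stand-in, and I'm only pretending to measure anything):

import org.apache.log4j.Logger;

public class OrderProcessor
{
    private static Logger logger = Logger.getLogger(OrderProcessor.class);

    public void processOrder(String orderId)
    {
        long start = System.currentTimeMillis();
        // ... the code being measured ...
        long elapsed = System.currentTimeMillis() - start;

        if (logger.isDebugEnabled())
        {
            // the guard means the fast path skips message construction entirely
            logger.debug("processOrder(" + orderId + ") took " + elapsed + " ms");
        }
    }
}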

The real barrier to profiling — and by extension, intelligent optimization — is coming up with a representative workload. “Representative” is the key word here: you can write a performance test that spends all of its time doing things that your real users never do. Even when you think that you know what the users will do, it's tempting to take shortcuts: to profile an eCommerce application, you might add 10,000 items to a single cart. But the optimizations that you make after this test could be meaningless if the real world has 10,000 carts containing one item. And even if you get that right, there may be different implications running on an empty database versus one that has lots of data.

So yes, it's hard. But ultimately, the brain cycles that you spend thinking about representative workloads will have a greater payoff than those you spend thinking about micro-optimizations. And the end result feels better too, much like foregoing a can of soda for a glass of water.