Monday, December 23, 2013

Learning Go: Imports

I'm going to finish this series of posts with an interesting but in my opinion unfinished feature: third-party imports. Unless you like to reinvent wheels, your projects will depend on libraries written by other people. To help you do this, Go's import declaration allows you to reference packages stored in open source repositories.

For example, if you want to use the Go project's example square root implementation, you could write the following:

package main

import "fmt"
import "code.google.com/p/go.example/newmath"

func main() {
    fmt.Println("The square root of 2 is ", newmath.Sqrt(2))
}

If you just try to build this code, you'll get an error message indicating that the compiler can't find the package. Before you can use it, you have to retrieve it:

go get code.google.com/p/go.example/newmath

This is, to say the least, inconvenient: if you depend on multiple libraries, you have to manually retrieve each of them before you can build your own project. Fortunately, go get resolves transitive dependencies, and if your project is stored in one of the “standard” repositories you can leverage this feature to retrieve your project and its dependencies in one step. However, at least for GitHub projects, this technique doesn't clone the repository. If you want to make changes — or get updates from your dependencies — you need to manually clone.

A bigger problem is that there's no mechanism for versioning: you always get the trunk revision. If all of your libraries are backwards compatible, that may not be a problem. My experience suggests that's a bad thing to rely upon. In practice, it seems that most developers retrieve the libraries that they want to use, then check those libraries into their own version control system: creating a “locked” revision that they know supports their code.

As I said, I think the feature is unfinished. Versioning is a necessary first step, and should not be too difficult to add. But it's not enough. If you rely on retrieving packages from an open-source repository, you also rely on the package owner; one fit of pique, and your dependency could disappear. Along with versioning, Google needs to add its own repository for versioned artifacts, much like Maven Central, CPAN, or RubyGems.org. This could work transparently to the user: remote imports of public projects would be proxied through Google's server, and it would keep a copy of anything that had a version number.

Thursday, December 19, 2013

Learning Go: Goroutines and Multiple Cores

Here's a program that I've been using to explore Go concurrency. If you want to play along at home, you'll need to compile and run it locally; the Playground won't suffice.

package main

import (
    "fmt"
    "runtime"
    "syscall"
)

type Response struct {
    Received int
    Calculated int
    Handler int
    Tid int
}

func Listener(me int, in chan int, out chan Response, done chan int) {
    for val := range in {
        out <- Response{val, runCalc(me), me, syscall.Gettid()}
    }
    done <- me
}

func runCalc(num int) int {
    zz := 1
    for ii := 0; ii < 100000000; ii++ {
        zz += ii % num
    }
    return zz
}

func main() {
    fmt.Println("Main running on thread ", syscall.Gettid(), " numCPU = ", runtime.NumCPU())

    chSend := make(chan int, 100)
    chRecv := make(chan Response)
    chDone := make(chan int)

    listenerCount := 8
    for ii := 1; ii <= listenerCount; ii++ {
        go Listener(ii, chSend, chRecv, chDone)
    }

    messageCount := 100
    for ii := 0; ii < messageCount; ii++ {
        chSend <- ii
    }
    close(chSend)

    for listenerCount > 0 {
        select {
        case data := <-chRecv:
            fmt.Println("Received ", data.Received, ",", data.Calculated, " from ", data.Handler, " on thread ", data.Tid)
            messageCount--
        case lnum := <-chDone:
            fmt.Println("Received DONE from ", lnum)
            listenerCount--
        }
    }

    fmt.Println("Main done, outstanding messages = ", messageCount)
}

The short description of this program is that it kicks off a bunch of goroutines, then sends them CPU-intensive work. My goal in writing it was to explore thread affinity and communication patterns as the amount of work increased. Imagine my surprise when I saw the following output from my 4-core CPU:

go run multi_listener.go 
Main running on thread  27638 , numCPU =  8
Received  0 , 1  from  1  on thread  27640
Received  1 , 1  from  1  on thread  27640
Received  2 , 50000001  from  2  on thread  27640
Received  3 , 100000000  from  3  on thread  27640
Received  4 , 150000001  from  4  on thread  27640
Received  5 , 200000001  from  5  on thread  27640
Received  6 , 249999997  from  6  on thread  27640
Received  7 , 299999996  from  7  on thread  27640
Received  8 , 350000001  from  8  on thread  27640
Received  9 , 1  from  1  on thread  27640
Received  10 , 1  from  1  on thread  27640
Received  11 , 50000001  from  2  on thread  27640
Received  12 , 100000000  from  3  on thread  27640
Received  13 , 150000001  from  4  on thread  27640
Received  14 , 200000001  from  5  on thread  27640

The thread ID is always the same! And top confirmed that this wasn't a lie: one core was consuming 100% of the CPU, while the others were idle. It took some Googling to discover the GOMAXPROCS environment variable:

experiments, 505> export GOMAXPROCS=4
experiments, 506> go run multi_listener.go 
Main running on thread  27674 , numCPU =  8
Received  2 , 350000001  from  8  on thread  27677
Received  0 , 299999996  from  7  on thread  27678
Received  1 , 1  from  1  on thread  27674
Received  3 , 350000001  from  8  on thread  27677
Received  4 , 50000001  from  2  on thread  27679
Received  5 , 299999996  from  7  on thread  27678
Received  6 , 200000001  from  5  on thread  27674
Received  7 , 249999997  from  6  on thread  27677
Received  8 , 100000000  from  3  on thread  27679
Received  9 , 1  from  1  on thread  27678
Received  10 , 200000001  from  5  on thread  27674
Received  11 , 150000001  from  4  on thread  27677

This variable is documented in the runtime package docs, and also in the (28 page) FAQ. It's not mentioned in the Go Tour or tutorial.

I'm a bit taken aback that it's even necessary; however, the comment that goes along with the associated runtime method gives a hint: “This call will go away when the scheduler improves.” As of Go 1.2, the behavior remains, one of the quirks of using a young framework.

Wednesday, December 18, 2013

Learning Go: Slices

Slices are one of the stranger pieces of Go. They're like lists or vectors in other languages, but they have some peculiar behaviors, particularly when multiple slices share the same backing array. I suspect that a lot of bugs will come from slices that suddenly stop sharing.

To explain, let's start with a simple slice example: creating two slices backed by an explicit array (you can run these examples in the Go Playground):

package main

import "fmt"

func main() {
    a := []int{1, 2, 3, 4, 5}
    s1 := a[1:4]
    s2 := s1[0:2]
    
    fmt.Println(a)
    fmt.Println(s1)
    fmt.Println(s2)
}

When you run this program, you get the following output (the slice operator is inclusive of its first parameter, exclusive of its second):

[1 2 3 4 5]
[2 3 4]
[2 3]

As I said, these slices share a backing array. A change to s2 will be reflected in s1 and a:

func main() {
    a := []int{1, 2, 3, 4, 5}
    s1 := a[1:4]
    s2 := s1[0:2]
    
    s2[0] = 99
    
    fmt.Println(a)
    fmt.Println(s1)
    fmt.Println(s2)
}
[1 99 3 4 5]
[99 3 4]
[99 3]

If you're used to slices from, say, Python, this is a little strange: Python slices are separate objects. A Go slice is more like a Java sub-list, sharing the same backing array. But wait, there's more: you can add items to the end of a slice:

func main() {
    a := []int{1, 2, 3, 4, 5}
    s1 := a[1:4]
    s2 := s1[0:2]
    
    s2 = append(s2, 101)
    
    fmt.Println(a)
    fmt.Println(s1)
    fmt.Println(s2)
}

Since s2 shares backing store with s1 and a, when you append a value to the former, the latter are updated as well:

[1 2 3 101 5]
[2 3 101]
[2 3 101]

But now what happens if we append a bunch of values to s2?

func main() {
    a := []int{1, 2, 3, 4, 5}
    s1 := a[1:4]
    s2 := s1[0:2]
    
    s2 = append(s2, 101)
    s2 = append(s2, 102)
    s2 = append(s2, 103)
    
    fmt.Println(a)
    fmt.Println(s1)
    fmt.Println(s2)
}
[1 2 3 101 102]
[2 3 101]
[2 3 101 102 103]

Did you see that one coming? Here's one more piece of code to ponder:

func main() {
    a := []int{1, 2, 3, 4, 5}
    s1 := a[1:4]
    s2 := s1[0:2]
    
    s2 = append(s2, 101)
    s2 = append(s2, 102)
    s2 = append(s2, 103)

    a[3] = 17
    
    fmt.Println(a)
    fmt.Println(s1)
    fmt.Println(s2)
}
[1 2 3 17 102]
[2 3 17]
[2 3 101 102 103]

As you can see, s2 no longer uses a as its backing array. Not only does a not reflect the last element added to the slice, but changing an element of a does not propagate to s2.

This behavior is hinted at in the slice internals documentation, which says that append() “grows the slice if a greater capacity is needed.” Since Go requires you to pay attention to return values, whatever code appended to the slice will always see the correct values. But if you have other slices that you think refer to the same backing array, they won't.
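You can predict when a slice will detach by looking at its capacity: append() reallocates only when the new length would exceed cap(). A small sketch, using the same slices as the examples above:

```go
package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4, 5}
	s1 := a[1:4]
	s2 := s1[0:2]

	// s2 has room to grow inside a's array: len 2, cap 4.
	fmt.Println(len(s2), cap(s2)) // 2 4

	s2 = append(s2, 101) // within cap: writes into a[3]
	s2 = append(s2, 102) // fills cap: writes into a[4]
	s2 = append(s2, 103) // exceeds cap: reallocates; s2 detaches from a

	fmt.Println(a)  // [1 2 3 101 102]
	fmt.Println(s2) // [2 3 101 102 103]
}
```

Checking cap() before appending is the only way to know, from the code alone, which of these two behaviors you'll get.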

I can understand why you would want slices to share a backing array: they represent a view on that array. And I can understand the desire for expanding the slice via append(): it's the behavior that other languages provide in a list. But this blend seems to be the worst of both worlds, in that you never know whether changing a given slice will mutate other slices or arrays. I recommend treading carefully.

Tuesday, December 17, 2013

Learning Go: Syntax Quirks

Whenever you approach a new language you'll stumble over things that are almost, but not quite, the same as the language you're currently using. Coming from a C and Java background, I've had several such experiences with Go. This post is about two of the syntactical oddities: things that result in compiler errors and not bugs. Future posts will be about things that show up at runtime.

Short-form Declarations

There are two ways to declare and initialize a variable in Go:

var x int = 10
y := 10

The first form is readily understandable to Java or C programmers, even though the syntax is strange: it defines a variable of type int, and assigns the value 10 to it. The second form relies on type inference to do almost the same thing.

I say “almost” because the literal 10 is a numeric constant, which is permitted to have arbitrary precision. That means you have to know what type the compiler will pick (for an integer literal, it's int). This isn't really an issue in practice, as short-form declarations are primarily used for function returns, and the compiler will quickly correct you if you assume incorrectly.

A somewhat more annoying quirk is re-assignment, as shown below:

x := 10
x, y := 20, 30      // this compiles
x := 30             // this doesn't

You're allowed to re-assign a variable that was formerly assigned using a short-form declaration only if you also assign a newly-declared variable at the same time. I suppose that the rationale is to prevent accidental overwrites of a variable, while still allowing a single “error” variable for function calls, but I think it just adds to the mental effort needed to understand a program. In some cases you can use “:=”, in others you must use a plain “=”; I prefer consistency.

Braces or Parentheses?

Like most C-family languages, Go uses braces to delimit blocks, and parentheses to delimit function arguments. What's confusing is that some things that you expect to be blocks or functions aren't. For example, the import declaration:

import (
    "fmt"
)

That certainly looks like it should take a block: it's a declaration, not a function call. But those are parentheses, not braces, and I have no idea why.

And going the other way, we have the initialization of a struct (this example copied from the Go Tour):

type Vertex struct {
    X int
    Y int
}

func main() {
    v := Vertex{1, 2}
    // ...
}

I'm always tripping over this one: I think it's a constructor function, but there aren't any constructors in Go. It's actually a composite literal, much like a Java array initializer.
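One thing that helps me keep composite literals straight is the field-name form, which makes them look less like a constructor call. A small sketch, building on the Vertex type above:

```go
package main

import "fmt"

type Vertex struct {
	X int
	Y int
}

func main() {
	// Positional form, as in the Tour example.
	v1 := Vertex{1, 2}

	// Field-name form: order doesn't matter, and omitted fields
	// get their zero value.
	v2 := Vertex{Y: 2, X: 1}
	v3 := Vertex{X: 1} // Y defaults to 0

	fmt.Println(v1, v2, v3) // {1 2} {1 2} {1 0}
}
```

The field-name form is also more resilient: adding a field to the struct won't break existing literals.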

Monday, December 16, 2013

Learning Go

A few months ago I decided that Go was a language that I wanted to learn. Step one was to find a local Go user group: there's no substitute for discussing the quirks of a new language with other people. The group I joined has a mix of people that are using Go as part of their daily work, along with those like me who are just trying to pick up the language.

Step two was to download the distribution. It comes with a webserver that appears to have the entire golang.org website, or at least all of the documentation and wiki pages. That's useful, as most of my time for learning is (was) while riding the train. I'm not entirely certain that I like running a server in order to browse documentation — no, actually I'm certain that I don't — but it seems to be the way the world is moving.

Step three was to download the Go Tour (also available online). This is, without question, the best tutorial that I've ever used. It reduces the language to about 60 bite-size pieces, with an example program for each, and includes a service that lets you run those programs from within the page. It's not quite a REPL, but it's close (the online Go Playground is the program runner without the tutorial).

As a teaching tool, the Tour is nice in that you can easily page up and down between lessons. This is particularly useful when doing one of the exercises, where you're presented with an empty function and have to remember syntax. The only problem with the Tour is that it doesn't save the programs in local storage: when you close your browser window, your work is gone.

The next few posts will give my initial impressions of Go, including some of the places that I think bugs will lurk. I'm also thinking of a somewhat more substantial program to give me a real sense of how the language works.

Saturday, November 30, 2013

Accessing the World Via Cellphone

My mother doesn't have a connection to the Internet. For that matter, she doesn't have a computer; if I ever want to explore the reasons why I'm a closet Luddite, I need look no further. This makes the Thanksgiving holiday a bit challenging: I want to spend the entire week visiting (less traffic, less overall stress), but usually need to work at least a couple of days.

It used to be that there were plenty of neighbors with open wifi, but in the past few years they've either changed providers or learned about security; now everyone in range requires a password. We've tried going to the town library, but that's really not a great place to work (when did librarians switch from shushing patrons to talking loudly amongst themselves?). And we've spent days with family and friends, but it's hard to actually focus on work in that situation.

This year I decided to try something different: bring my Internet with me via a mobile hotspot. I'd used Clear WiMax at a client site for a couple of months, and occasionally tether my cellphone, so thought that it might be a viable solution. After some research I settled on a prepaid 5 gigabyte plan from T-Mobile. Their coverage map indicated an excellent signal at my mother's house (unlike AT&T, where I'm lucky to get one bar on the second floor), there was no long-term commitment, and the price was good ($42.40: $30 for service, $10 for a SIM, and $2.40 for tax). I was also able to use my old (unlocked) Galaxy Nexus as the hotspot: it's limited to 3G, but I didn't think the extra speed of LTE justified the $150 cost of a new device.

Overall, the experiment was a success: I consistently saw download speeds in the 2-4 Mbit/sec range, and was able to browse the web, connect to remote servers via SSH, and participate in video chats without problem. Not quite what I'm used to from my home FiOS service, but better than the ADSL service that I used for ten years.

The major problem that I experienced was that it took a long time to initiate connections. It didn't matter whether I was opening an SSH session or a web page; I'd wait several seconds until the data started flowing. Once the connection was established, data flowed without delay.

Well, almost without delay: when downloading large files, I could see the rate fluctuating, from a high of around 600 kbytes/sec down to a low under 10 kbytes/sec, with occasional stalls. I'm not sure whether that was due to rate limiting on the part of T-Mobile, or competition for the network. At one point I looked at the phone's status screen, and saw the network type repeatedly switching between UMTS, HSDPA, and HSPA+, so the latter seems most likely. But video calls didn't seem affected, other than occasional blurring.

Bottom line: for less than the cost of a tank of gas, I was able to work in my pajamas. Well worth it.

Tuesday, November 12, 2013

What Can You Build in a Day?

I spent this past Sunday morning as a mentor at the Pilot Philly hackathon. It's a 24-hour hackathon for high school students, and going in, I had some trepidation. I envisioned a post-Katrina-esque setting, filled with teenage boys who hadn't slept in 24 hours. I'm also a bit concerned that hackathons foster a culture of “build it fast and who cares what happens next” that isn't compatible with long-term software development.

And I had a commitment for Saturday. But I exchanged a few emails with the organizers, they said they'd take any time I could give, so I arrived at 9 AM on Sunday. While the workspace — with tables jammed together and sleeping bags strewn on the floor — did have a post-apocalypse gestalt, my fears of chaos turned out to be unwarranted: the teams were focused on getting their projects done.

My gender assumptions also turned out to be wrong: while I didn't count, my sense was that at least a quarter of the participants were young women. I know that there are several organizations that focus on breaking down the gender divide in technology. Either they're having an effect, or the “digital generation” views such divides as ancient history.

I was pointed at a team building an app to select random takeout from local restaurants (“when you don't want to make a decision about lunch”). They had reached the point where they were getting data from the third-party service, but were having some trouble extracting the data they needed and formatting it in HTML (“if this was a command-line app we'd be all over it”). They were working in Python, a language that I've used only recreationally, but debugging is something that I can do. We worked through logging, and discussed how to filter the service results. I helped them with HTML, and showed them some of the resources that I use. And in a short time, the app was working.

I say “a short time,” but it was really the entire morning. At one point, while the team was working on details, I sent my wife a message that I'd be home around 1:30 for lunch. Her response: “you mean 2:30, right?” I have to be honest: it's been years since my at-work mornings passed so quickly.

And that is my take-away from this experience. I hope that, via anecdotes and advice, I have made these four kids better programmers. But there's no question that they've reminded me why I got into this field in the first place.

Friday, November 8, 2013

Review: Coursera's "Functional Programming Principles in Scala" by Martin Odersky

I just completed this class as part of my introduction to Scala. For those of you that aren't familiar with Coursera, it is one of a growing number of organizations that provide free online education, taught by professors from well-known universities. These organizations are part of a revolution in education, one that might just change the way we think about post-secondary education. At present, they provide an excellent way to sharpen one's skills.

This was my first Coursera class, and I was impressed by its basic mechanics: the class website, lectures, and assignments. I'm not certain how much of this is attributable to the Coursera team, and how much to Martin Odersky's grad students. I expect the automated grading tools to be the latter, but the general website to be shared between classes. Unfortunately, it appears that you can't look at a class' content without signing up, so I'll have to wait for my next class to make a comparison.

If you're thinking of taking an online class, I caution you not to underestimate the time commitment. This class was advertised as 5-7 hours a week, which doesn't seem like much until you try to fit it into your schedule. I gave over a couple of evenings a week for lectures, along with several hours for assignments (usually before work, but occasionally on weekends). Since I got my undergraduate degree in a part-time program, taking two classes a semester in addition to a full-time job as a software developer, I didn't think I'd have trouble with the workload. I don't think my lifestyle has changed much in the intervening dozen years, but it was tough to fit this class in.

On to the class. It was seven weeks long, with a couple of hours of lectures each week and a total of six programming assignments (some spanning more than a week). Lectures were broken into segments of 10-30 minutes, focusing on a single topic. However, they weren't simply videos: each lecture had one or more “quizzes” embedded within it, requiring viewer interaction and occasional typing. As a result, you'll need access to the Internet; they aren't something that you can complete on a plane.*

Assignments are packaged as ZIP files containing skeleton source code and an Eclipse project configuration. They can also be built using the sbt build tool, and you need to use this tool to submit them. The source code provides stub methods for the expected implementation and a few minimal unit tests; you are expected to add more tests while completing the assignment. When you submit an assignment, an automated grading tool runs a battery of unit tests and a style check (don't use short-circuit returns!). If you fail any of the unit tests, you'll see the test output but not the test itself; sometimes it's a challenge to figure out what the grader is trying to test.

If you've watched the MIT 6.001 lectures by Abelson and Sussman (aka the SICP lectures), you'll find the Odersky lectures very similar, at least at the beginning (to the point of using many of the same examples). Another point of similarity is that the lectures do not focus on the language itself. For Scheme, that's not so bad: the language is simple. For Scala, you'll want to have an introductory book by your side (for example, Odersky's own Programming in Scala). You'll also want to keep the Scala Standard Library documentation open.

My chief complaint with the course is that it occasionally focused on functional programming to the detriment of good programming, particularly in the assignments. For example, the assignment on Huffman coding used classes as mere data containers, moving all of the logic into standalone functions. Within the context of the assignment — pattern matching — this approach makes sense. However, in the real world I would want to see this implemented with runtime polymorphism: pattern matching in this example is just a glorified switch.**

That said, I highly recommend this class. If you haven't worked with a functional language in the past, you'll find the ideas new and (hopefully) interesting. Even if you have used a functional language, you might find something new: for me, the assignment on “functional sets” (implementing the Set abstraction in pure code) was particularly interesting. And if you're using Scala professionally, I think it's worthwhile to see how the creator of the language approaches problems.

I'll finish with an unexpected insight: Java's generics implementation — and in particular, that parameterized collections don't behave like arrays — was 100% intentional. I'm not sure if you'll be able to watch this lecture, but if you do, you'll see (about 7 minutes in) Odersky explain how Java arrays are bad because they don't allow the compiler to catch certain type errors (the same material is also covered in Programming in Scala). After seeing this, I dug up the JSR-14 spec, and saw that yes, Odersky was one of the people responsible.


* Actually, it may be possible to watch on a plane; a quick session with Firebug indicates that the videos are straightforward HTML5. I haven't looked at the code behind them, so it's possible that the quizzes are implemented in JavaScript.

** My concern is that the assignments distribute logic rather than encapsulate it. A more egregious example, but one less open to explication, is in a later assignment: defining a simple type alias to represent a list of pairs. What the simple alias does not convey, however, is that the list must remain sorted. An object-oriented approach would maintain this invariant in the class; consumers would never see/create an unsorted list. The functional approach, by comparison, requires every function that transforms the list to maintain the invariant.

Friday, October 25, 2013

Deprecation and Technical Debt

API designers deprecate classes and methods as a signal that they shouldn't be used. Usually because there's a better way to do the same thing, but occasionally (as in the case of Java's Date constructors) because the existing methods have unintended consequences. Deprecation is the first step in controlled breakage of backwards compatibility, a warning for users that they can't rely on the functionality always being present.

Of course, in the real world, deprecated functionality rarely gets removed. The Date constructors have been with us since Java 1.0.x and show no sign of going away. Indeed, of all the major frameworks that I've used, the only one that takes deprecation seriously is Spring — and even then, only minor things seem to get removed; you can still compile a program written for Spring 1.x with only a few changes.

However, leaving deprecated code in your codebase is a form of technical debt. You run the risk that the functionality will one day be removed, and that you'll have to scramble to replace it. But more important, if your code contains a mix of deprecated and non-deprecated features, you have to expend extra effort to maintain it.

The worst case that I've seen was a company that built websites for clients. One of their features was a content management system: a tool that allowed the client to create their own content, inserted into the page when various conditions were met. Standard stuff, but this company actually had three incompatible content management systems, all home-grown. Each was developed to replace the previous.

The reason that all three systems remained in active use was that there was a cost to convert to the newer framework, and neither the company nor their clients were willing to pay that cost. In effect, they chose to leave deprecated code in their codebase.

Since the frameworks were all built in-house, there was no reason to worry about them going away unexpectedly. But what did go away were the developers who had built the frameworks, and with them the knowledge of how the frameworks worked. Although the older frameworks didn't have a lot of bugs, when one did appear it was a crisis situation, with a mad search to find someone with enough knowledge to track it down and fix it. Even without bugs, the lack of knowledge meant that every enhancement to the client site took extra time, as each new developer learned enough to avoid breaking the CMS.

This situation continued for as long as I worked at that company; for all I know, they've added a fourth or fifth system to the mix. Although the old systems imposed an unnecessary cost on each project, that incremental cost was always lower than the cost to upgrade. But over time, the costs added up, and the client paid much more than an outright conversion (which the company didn't mind, as that increased revenue).

Worse, developers did all they could to avoid contact with the older systems, or to have their names on any checkins associated with it. Getting pulled onto a CMS problem meant that your own project schedules would slip, and you'd have to listen to upset project managers waste more of your time. Much easier to let the other team sink or swim on their own.

I'm not saying that ignoring deprecations will create dysfunction and discord within your development team. But do you want to risk it?

Friday, October 18, 2013

Immutability and Thread-Safety, Part 2

I've had a few offline conversations about my last post, and decided to dive a little deeper into concurrency and immutability. As an example, I'll use a shared map: the sort of thing you might use to hold session data in a web application. The first implementation simply wraps Scala's immutable.Map:

class SharedMap {

  var theMap : Map[String,String] = Map.empty;

  def size = theMap.size

  def get(key:String) = theMap.get(key)

  def put(key:String, value:String) {
    theMap = theMap + (key -> value)
  }
}

Because theMap is immutable, you have the following guarantee: put() will not interfere with get(). If they happen to be called at the same time, the get will use the old instance of the map, and not have to worry about partial changes due to the put. This is a powerful guarantee, but it does not mean that this code is thread-safe.

The problem is that this implementation allows an update race: if two threads call put() at the same time, they make their own immutable changes to the same immutable base map. But neither's changes include the other's, and only one change will be saved as the new base map.

Appending an item to an immutable map is a very fast operation. So fast that you might never hit the update race in testing. But in a production environment, with dozens or hundreds of concurrent updates, it is bound to happen, and you'll lose data. To see this happen rather spectacularly, run the following program.*

object ConcurrentUpdateRunner extends App {

  val map = new SharedMap

  val pool = java.util.concurrent.Executors.newFixedThreadPool(20)

  val futures = (1 to 100).map(ii => {
    val callable = new java.util.concurrent.Callable[String]() {
      def call() = {
        val x = Thread.currentThread.getName
        map.put(ii.toString, x)
        x
      }
    }
    pool.submit(callable)
  })

  val results = futures.map(_.get)
  println(s"number of executed Callables: ${results.size}");

  println(s"final map size: ${map.size}")
}

So what can you do?

One option is to introduce synchronization. For a simple data structure like this map, that's the route that I'd take, although some people see any use of synchronization as taboo.

def get(key:String) = theMap.get(key)

def put(key:String, value:String) {
  synchronized { theMap = theMap + (key -> value) }
}

Note that I just synchronized put(), something that I warned against in a previous post. But that warning applies only to mutable data structures. Since the underlying map in this example is immutable, we can use half-synchronization without worry, and avoid contention between get and put.

A second alternative is to forego the Scala library, and use a ConcurrentHashMap.

At first glance, this appears to be a perfect solution: you don't have to think about concurrent access, just rely on the fact that Doug Lea understands it far better than you. However, a ConcurrentHashMap only guarantees thread safety of individual gets and puts. For our trivial map example this is sufficient. For most real-world applications it isn't, because you need atomic updates of data that's already in the map.

A third option, which I mentioned in my last post, is to introduce a message queue between the map and its consumers. This is not a particularly novel idea: Odersky suggests it in Programming in Scala (in the section “Good actors style”), and Armstrong uses a similar example in Programming Erlang. Here is one possible implementation, using Scala actors (which, although deprecated, are usable without the need for an additional library in the classpath).

class ActorMap1 {

  case class SizeMessage()
  case class GetMessage(key:String)
  case class PutMessage(key:String, value:String)

  val theActor = new DaemonActor {
    var theMap : Map[String,String] = Map.empty

    def act() {
      loop {
        receive {
          case SizeMessage() => {
            sender ! theMap.size
          }
          case GetMessage(key) => {
            sender ! theMap.get(key)
          }
          case PutMessage(key,value) => {
            theMap = theMap + (key -> value)
          }
        }
      }
    }
  }

  theActor.start

  def size = {
    val fut = theActor !! new SizeMessage
    fut.apply
  }

  def get(key:String) = {
    val fut = theActor !! new GetMessage(key)
    fut.apply
  }

  def put(key:String, value:String) {
    theActor ! new PutMessage(key, value)
  }
}

This example adds quite a bit of complexity to the shared map: new classes to hold messages for the actor, and a facade object to handle the communications. And it still has a significant flaw: there's only one loop, handling both gets and puts. We've achieved thread safety, at the cost of introducing a bottleneck for concurrent operations.

In the case of an in-memory map, this bottleneck probably isn't significant. Unless you have dozens of threads constantly hitting the map, to the exclusion of other actions, the occasional waits will go unnoticed. But what if this map is actually a write-through cache for a database or socket? In that case, the put will take significant time, and contention becomes a real possibility. To resolve this problem, we can add another actor to the mix.

class ActorMap2 {

  case class SizeMessage()
  case class GetMessage(key:String)
  case class PutMessage(key:String, value:String)
  case class InternalPutMessage(key:String, value:String)

  val getActor = new DaemonActor {
    var theMap : Map[String,String] = Map.empty

    def act() {
      loop {
        receive {
          case SizeMessage() => {
            sender ! theMap.size
          }
          case GetMessage(key) => {
            sender ! theMap.get(key)
          }
          case InternalPutMessage(key,value) => {
            theMap = theMap + (key -> value)
          }
        }
      }
    }
  }

  val putActor = new DaemonActor {
    def act() {
      loop {
        receive {
          case PutMessage(key,value) => {
            // some long-running operation
            getActor ! InternalPutMessage(key, value)
          }
        }
      }
    }
  }

  getActor.start
  putActor.start

  def size = {
    val fut = getActor !! new SizeMessage
    fut.apply
  }

  def get(key:String) = {
    val fut = getActor !! new GetMessage(key)
    fut.apply
  }

  def put(key:String, value:String) {
    putActor ! new PutMessage(key, value)
  }
}

Yet more complexity, and it still isn't complete: put() doesn't have any way to signal the caller that an error occurred, and real-world actor systems need supervisors. Plus, there's still the possibility that parts of the application will hold onto stale data (a problem that can never truly be solved).

But that was the entire point of my last post. It isn't easy to correctly implement concurrent applications, and immutable data structures don't help you solve the bigger issues. If you rely on them to do so, you will be burned. You need to understand how the larger application mutates state, and choose an appropriate strategy for managing those mutations.

Finally, although in this example it appears to be a case of hunting mosquitoes with a shotgun, decomposition into independent actors does solve many of the bigger problems of concurrency. But that's simply because it forces you into the techniques of information hiding and limited interfaces that have been part of software engineering since the 1970s.


* I used a threadpool, rather than actors, to have better control over the number of threads. I'm also running on a multi-core processor, which means that I'm really hitting the map from multiple threads at the same time. Switching between single-core and multi-core machines tends to highlight different threading bugs.

Monday, October 7, 2013

The Big Myth of Functional Programming: Immutability Solves All Problems

I'm not against immutability. Indeed, I think that many concurrency problems can be eliminated by making data structures immutable. Most of those problems, however, are caused because their developers never really thought about concurrent access. And while switching to immutable data structures may solve some problems, it creates others — and in my opinion, those others are much more difficult to debug.

The underlying problem is that the whole point of a program is to modify state. A program that takes no inputs and produces no outputs is worthless, except as a way to turn an expensive CPU into a space heater.

But if the purpose of a program is to modify state, how does it hold the state that's being modified? A “pure” functional program must use the arguments and local variables of its functions. There is a top-level function, which creates the program's initial state, passes that initial state to functions, and gets back the final state.

And if your program has a single thread, or multiple independent threads, that's all you need to do (I also like the idea of decomposition into a tree of independent threads). But if your program consists of communicating threads, the purely functional, immutable data approach is not sufficient: different threads will hold different representations of what they consider the same data. Also known as a race condition.

The easiest way to solve such problems is to introduce a well-designed concurrent data structure (such as Java's ConcurrentHashMap) that is shared between threads. This data structure holds the canonical view of shared state: each thread reads the map whenever it needs to know the latest data. However, a concurrent map by itself isn't sufficient to solve race conditions: it guarantees atomic puts and gets, but not updates.

A better alternative, in my opinion, is to follow the Go mantra of “share by communicating, rather than communicate by sharing.” In other words, wrap your shared state in a data structure that has a message queue between it and the rest of the program. The rest of your program appears to be fully functional: mutations are just function invocations, and you can choose to implement your shared data using immutable objects.
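As an illustration, here's what that structure might look like in Java, using a BlockingQueue as the mailbox (Java 8 syntax; all names are my own invention). A single owner thread is the only code that ever touches the map, so updates can't be lost:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical message-queue wrapper around shared state: callers never
// touch the map directly, they enqueue operations for the owner thread.
public class MessagingMap {

    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private final Map<String,String> theMap = new HashMap<>(); // owner thread only

    public MessagingMap() {
        Thread owner = new Thread(() -> {
            try {
                while (true)
                    mailbox.take().run();   // process one message at a time
            } catch (InterruptedException e) {
                // interrupted: fall through and let the thread exit
            }
        });
        owner.setDaemon(true);
        owner.start();
    }

    // fire-and-forget, like the actor version's put
    public void put(String key, String value) {
        mailbox.add(() -> theMap.put(key, value));
    }

    // request/reply: enqueue a read and block until the owner answers
    public Optional<String> get(String key) throws InterruptedException {
        BlockingQueue<Optional<String>> reply = new ArrayBlockingQueue<>(1);
        mailbox.add(() -> reply.add(Optional.ofNullable(theMap.get(key))));
        return reply.take();
    }
}
```

Because the mailbox is FIFO, a get() enqueued after a put() is guaranteed to see that put's effect; like the single-actor version, though, one queue serializes reads and writes.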

This approach doesn't completely eliminate races: there is still a race between writes and reads (also known as “stale data”). However, there is no way to eliminate that particular race. No matter how hard you try, you can never guarantee that multiple independent threads have a consistent view of the world.

But stale data in one part of your program is far better than missing data due to an unrealistic expectation of what immutable data structures give you.

Monday, September 16, 2013

Thoughts on Scala Before Using It

While I think Go is my future, I'm about to embark on a project using Scala. So I've spent the last week reading Programming in Scala, by Martin Odersky et al. And I've signed up for the Coursera class Functional Programming Principles in Scala, which starts today. I do have some experience with a functional approach to programming, so it's not a completely foreign land. But I lean strongly toward an object-oriented, message-driven (push) approach to decomposing problems; the functional (pull) approach will take some getting used to.

This is the first of two posts. I'll write the second six months or so from now, after I've spent enough time with the language to feel that I “get it” — or don't. Here, then, are my first impressions, after diving just below the surface.

Things that I like

Implicit conversions
I'm not sure that no longer having to call toString() explicitly (with a null check) is a reason to use Scala, but it's close. Conversion code is almost always clutter, sometimes quite complex clutter. Being able to move that clutter into a method, and rely on the compiler to invoke the method as needed, is fabulous.
Null is an object
Java's concept of null, and the need to explicitly test for null-ness, is one of the less pleasant things that it inherited from C. It's a leading cause of code clutter, and drives the creation of utility classes that provide null-safe operations. I haven't looked at all the methods that Scala's Null provides, but equals() is enough to make me happy.
Traits
There's a well-worn adage to “favor composition over inheritance.” This can result in a plethora of function calls that simply call a delegate, violating the adage “don't repeat yourself.” Traits make composition easy, creating delegates and then hiding them behind a curtain of syntactic sugar. I will say that I've looked behind that curtain, and don't think I ever want to find myself debugging them.
For Comprehensions
This is a neat feature, which I first saw in Erlang. It makes for a very simple way to apply filters to a list's elements before taking some other action on those elements, and I agree that it largely replaces break and continue.
Match expressions
Another feature that has its roots in Erlang (or earlier inference languages). I'm especially impressed by constructor matching, which leverages the immutability guarantees of a “case” class (again, I looked behind the curtain: while I think the code generation could use some work — particularly reuse of local variables — the approach is elegant).

Things that I don't like

Optional semi-colons
Are semi-colons really that evil? They certainly aren't hard to type: the key is right there on the home row (at least on a US keyboard). Yet Scala, like JavaScript, makes them optional. Of course, the compiler still needs to decide where each statement ends, so it has to apply other rules. In Scala's case, the end of a line is the end of a statement, unless it doesn't make sense for a statement to end at that point.

I personally find this rule annoying for complex expressions, which I want to format on multiple lines, with the operator first on the line (so that it's obvious). Scala, however, forces me to put the operator at the end of the line, where it's hard to see, because otherwise the compiler thinks the preceding term is the end of the current statement. Perhaps this is a not-so-subtle nudge toward simpler expressions, but there are some cases where a real-world rule has lots of terms; it's just easier to read if the terms are on separate lines, even if they're function calls.

OK, that's just me whining. But, like other languages that attempt to do away with statement separators, Scala sometimes requires you to use them. One example is a variable declaration that precedes a standalone block. Not that you're likely to use a standalone block unless you're demonstrating scoping rules. But if you have to do it sometimes, why add the mental load of remembering when those times are?

Too many symbolic operators
This was one of my first impressions of Scala, after seeing a JUG presentation several years ago. I'm neutral on the subject of operator overloading: it can be abused, but I've seen enough elegant overloads to think it has value. My issue with Scala is that any arbitrary string of apparent line noise is an operator. This creates unnecessary mental load: is “::” really better than “concat”, or “<-” better than “in”? I don't think so.
Precedence and associativity rules
Languages like C++ provide operator overloading: you can create new definitions for existing operators, which follow the rules of the language's grammar. Scala provides operator creation: syntactic sugar that makes arbitrarily-named methods look like they're part of the language. But the precedence and associativity rules that underlie this feature are Byzantine, to say the least. There's really no excuse for this: if you're going to put programmers in the position of being language designers, you need to give them the tools to do the job properly; in this case, explicit definitions of a function-operator's precedence and associativity.
Underscores as parameter placeholders
This is, I think, conciseness taken too far: rather than define the parameters of a function literal, you use underscores in the function body; each underscore takes the place of a sequential parameter. Intellectually, I get it. Looking at code filled with underscores, I don't. And if you want to use the same parameter in multiple places, you're required to explicitly define it.

Better, in my opinion, would be an explicit parameter syntax. Personally, I like the way the shell does it: $1, $2 and so on for individual params, $* to represent all params.

Things that have me scratching my head

Primary constructor parameters and their relationship to fields
Here's the example given by Odersky (I'm copying this from the book, so errors are mine):
class Rational(n: Int, d: Int) {
  // ...
}
This makes perfect sense: by defining the class' member variables as constructor parameters, you save the boilerplate of explicit variable definitions and assignments. But then, a page later, he shows that they're not really fields in the Java sense: it's a compile-time error if you try to access them using another instance of the class:
def add(that: Rational): Rational = 
  new Rational(n * that.d + that.n * d, d * that.d)
Now, if you look in the actual class code, you'll find that these constructor parameters are, in fact, fields in the class. Unless you continue the example, create explicit fields, and don't otherwise use the constructor parameters in your code. For a language that eschews boilerplate, I just don't get it.

It is, however, something that I've seen before: Scala's primary constructor looks and acts just like a JavaScript constructor function (which is something that experienced JavaScript programmers don't often use). I can understand the pattern for JavaScript, which doesn't have rigidly-defined classes, but it doesn't make much sense for Scala.

Not everything is an expression
I've never liked the separation between statements and expressions, and Scala erodes that boundary in many places. The if expression is one such place; function returns are another. And yet, the boundary remains in several surprising places: a while statement is one example, but the truly baffling one is variable assignment. While I don't often use constructs such as x = y = 0, I do use the Java (and C) idiom of assigning a variable and testing its value (e.g., (s = in.readLine()) != null). But not in Scala.
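For reference, here's the assign-and-test idiom in Java; the line-counting wrapper is my own, but the while condition is the exact construct that Scala disallows:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ReadLoop {

    // counts lines using assignment-as-expression: the value of
    // (line = in.readLine()) is the value assigned, so it can be
    // tested against null in the same expression
    public static int countLines(BufferedReader in) throws IOException {
        int count = 0;
        String line;
        while ((line = in.readLine()) != null) {
            count++;
        }
        return count;
    }
}
```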

The only reason that I can imagine is that such behavior is functionally impure: I should use an iteration function that calls another function to process the iterated value, so the language will push me in that direction. I suspect the same reasoning is why while doesn't return a value: we shouldn't be using while.

Thursday, August 15, 2013

A Journey to the Kingdom of Verbs

HRH, DGR Terra Nominem, greetings

Your Grace. In furtherance of your desire to establish diplomatic relations with our neighbors, I have traveled to the Kingdom of Verbs to meet with Queen Repl. I fear that our potential for commerce is limited. They follow exceedingly strange customs, which I shall attempt to relay by anecdote.

To all appearances Queen Repl is the sole permanent inhabitant of the land. She creates minions from pure thought, and these minions exist solely at her discretion. The queen will create complex hierarchies of these minions to fulfill a task, and when the task is complete, the minions disappear. While some of the minions are given names, those names describe but do not define them; each minion is a unique entity, created for a single purpose.

The method by which these minions interact is also exceedingly strange; it took me months to understand it. While we follow the dictum of “tell, don't ask,” they abide by “ask, and you shall receive.” And while we use diverse techniques such as Queues and Busses to decouple Producers and Consumers, they know exactly whom to ask for a service and do not look elsewhere.

Data is a topic that is rarely discussed in this land; indeed, at times it seems almost a taboo. In some cases, when asked for data, a minion will go so far as to create another minion to provide the answer, rather than speaking directly. I can only imagine this as dedication to Platonism: there is an ideal, and there are forms of that ideal, and minions work with the forms while the ideal remains unknowable. Again, it seems uncommonly strange to us, but appears to work for them.

However, this attitude toward data left me with a question: whence comes inputs to the land, and where goeth outputs? Or is the land truly inward facing, minions performing actions without effect? The answer to this question, as you might surmise, was critical to my ability to engage the Queen.

I found the answer to this question in the monadii. They go by various names, but their role is to provide minions with an ultimate source and sink for data. At first, I thought they were like the Flappers of Laputa, standing in the background to make their masters aware of the outside world. As time passed, I realized that they also were bound by the dictum of the land, and only spoke when spoken to. It was my great fortune to learn that there was a minion responsible for listening to the monadii and asking the Queen to interpret their words.

To bring my story to a close, and not waste more of your precious time, I was able to establish contact with two of these monadii, who then provided my bona fides to the Queen and returned her reply to me. While I do not know what form of intercourse our two countries can engage, I am hopeful that Nouns and Verbs will find common ground.

I look forward to my next assignment, the Land of Inference. Your most humble servant,

(signed) Reginald, extends Diplomat, implements Plenipotentiary

Tuesday, August 13, 2013

Thoughts on Unvarying Variables and One-Instance Classes

Closures scare me. There, I've said it, now the cool kids won't speak to me. Which is strange, because the cool kids all seem to be into functional programming, and I see full-fledged closures as the antithesis of functional programming. Perhaps we are using different closures.

Clearly, definitions are in order, and my definition of a closure starts with it being a function that has the characteristics of an object: it can be referenced by a variable and passed as an argument to another function. But my definition doesn't stop there. Critically, a closure has unrestricted access to variables defined in its enclosing scope.

Here's an example of a closure in action (if you don't have Go installed, you can paste this into the Go Tour to run it). Everything here happens in the main thread, but I could create a goroutine to run either function, or I could pass these functions into other functions.

package main

import "fmt"

func main() {
    x := 0

    f1 := func() {
        x += 1
    }

    f2 := func() {
        x += 2
    }

    f1()
    f2()
    
    fmt.Println("x = ", x)
}

Some people will see this and say “well, I'd never use closures this way!” I'll act as devil's advocate by saying that sometimes — sometimes — using closures this way is incredibly useful. But not often. And, unfortunately, I've seen a lot of closures used in exactly that way, often unintentionally. And that's what scares me.

For a long time I thought that my fear was simply a fear of unbridled mutability. Indeed, the first version of this post (drafted several years ago but never published) started with the following:

Var, huh! Good God, y'all, what is it good for?

That, and the title, were just too good to discard with the rest of the text. If you're too young to get the reference, don't worry; it isn't really the cause of my fear.

For the last few weeks I've been discussing my fear with friends, and I've arrived at a better reason: closures create an ad hoc class, whose methods are conjoined solely by the fact that they share common variables. As a person who believes that object-oriented programming has something useful to offer, I find this deeply disturbing. It violates several principles of object-oriented design, not least being the single responsibility principle.

But worse, in my mind, is that the methods so conjoined don't have a name. They are simply actions that happen to know about the same Thing. OK, I realize that some people scorn the idea that all Things should have names. Perhaps those same people name all of their variables a1, a2 and so on. Personally, I stopped doing that when I stopped programming in BASIC.

Named classes, when used properly, provide a way to decompose a program into clearly bounded units of functionality. The methods in the class are all related in common purpose to provide that functionality. And again, used properly, names are an important part of this decomposition; if you see that a frob is getting fribbled, your attention turns immediately to the Fribbilator (or maybe the FrobManager, but definitely not the Bargulator). With closures, and their ad hoc relationships, your task becomes much harder: you might have to walk the entire codebase.

Does this mean I reject closures? No, of course not, not even closures that make use of their ability to mutate enclosing state; as I said, sometimes that's useful. But my fear — which I now name Caution — makes me ask the following question as I'm coding: “should this be a closure or a class?”

Friday, August 2, 2013

Where Do I Want to Go?

TLDR: The title is a pun.

In the past year or so I've been looking at different languages, trying to gain a sense of what I might do “after Java.” My investigations have been cursory, as they must be: adopting a (bad) Matrix reference, I don't think you can truly know a language until you have fought with it. But maybe you can get a sense of whether it's worth fighting. So here are the languages I've considered, and my impressions:

Scala
Many of my colleagues believe that Scala is the best “JVM language”; we're doing several projects with it, and offer several classes in the language. I like some of the ideas that it incorporates, especially the clear delineation between mutable and immutable data (and a strong preference for the latter).

But … it's ugly. My first impression of Scala was that it had too many symbolic operators. As I kept seeing it, my impression changed: it had too many features and ideas, period. And based on discussions with my colleagues, there isn't an easy path to learning the language: you need to program idiomatically from day one.

That said, it looks like Scala will be the next language that I use professionally, probably within the next few months. Maybe my opinion will change.

Clojure
I don't get LISP, and the people who claim to get it always seem a bit too religious to me (not merely religious, but too religious). Perhaps I will be enlightened if anyone can explain exactly why macros are different from any other interpreter — or for that matter, any type of code generation. For now, however, Clojure (as the most-commercial LISP) isn't in my future.
Groovy
Why isn't Groovy more popular than it is? Of the features that I listed in my previous post, Groovy has everything except a concurrency story. It's easy to write, runs on the JVM, can interact directly with Java libraries, and offers some neat features of its own. Yet the only people who seem to use it are those who adopted Gradle as their build tool.

It was while I was trying to learn Gradle that I decided Groovy wasn't for me. To truly understand Gradle, you need to understand the Groovy approach to building DSLs by blending functions and blocks into a seamless whole. It's quite impressive once you figure it out. But, like Scala, there's no easy path to knowledge. The path from “Hello, World” to idiomatic Groovy involves a quantum leap.

And the documentation, quite frankly, sucks. It's a bunch of disconnected wiki pages that assume you know what you're looking for before you look for it. Perhaps not so bad on its own, but I think it reflects the overall approach to Groovy's language design: it's trying to be the new Perl.

Ruby
I've worked a bit with Ruby. It's nice, but I don't find myself wanting to devote the next section of my career to being a “Ruby developer.”
JavaScript
Does JavaScript have a future outside of the browser? I don't think so, Node.js notwithstanding. I think the biggest issue is the lack of library support: if I want a feature that's not part of the core language I need to write it myself. It's much like C++ before the STL (and while JQuery is great for interacting with the DOM, it's not a general-purpose library).

I also make far too many typos to be a happy JavaScript programmer. Making this worse, most of the frameworks that I've used swallow exceptions.

Erlang
Erlang is a strange language; it's very obvious that it started life as a rules engine. It has some extremely interesting features, such as list comprehensions that are closer to a database query than anything you'll find elsewhere. And most important, it has a notion of “shared nothing,” message-passing concurrency that I like.

But … it's a strange language. Actually, the way that I often describe it is “primitive,” designed to solve the problems of the 1980s. There's heavy emphasis on buffer manipulation, while strings are treated as lists of (ISO8859-1!) characters. I'm seeing ever more projects that use it, but I think there are better alternatives.

Python
Python is a beautiful language: every time I get to use it, I smile. Unfortunately, as far as I can tell (based on not-so-regular attendance at the local Python Users' Group), it's confined to scripting and “need it now and not tomorrow” programs. It also doesn't have a good (to me) concurrency story.
Go
Go has many of the features that I consider important, and it has an impressive pedigree (Kernighan and Pike). It does have some strange quirks, which will be the topic of future posts. And it's a young language, still evolving; code written today may need to be rewritten tomorrow.

That said, Go seems to make the most sense for my future. Like Java, it was a language created for a clear purpose (concurrent back-end applications), and I happen to think that the future mainstream will be concurrent. Better then to have a language that was designed for that purpose, rather than one that has concurrency bolted on.

Coming up: my adventures in learning Go.

Monday, July 29, 2013

Notes from OSCon

I spent last week at OSCon, the O'Reilly Open Source conference. It was my first time in Portland, and I prepared by watching segments from Portlandia on YouTube. I fell in love with the city, but had mixed feelings about the conference.

OSCon is big. Four and a half days of talks, with up to 18 different choices for each time slot. Topics range from Perl (it started as the Perl Conference) to “Geek Lifestyle.” My current focus on alternatives to Java drove many of my choices, but I tried to sample a few topics that were new to me. Here, then, are my top take-aways from the conference:

Invest in disk drive manufacturers

This point was driven home by Jay Parikh's keynote speech. He is the VP of infrastructure engineering at Facebook, which receives 7 petabytes of new user photos each month. Users have the expectation that those photos will always be available and never go away. In response, Facebook designed a dedicated storage rack, and a new building to house them: it will have three server bays, each holding 1 exabyte of storage.

One exabyte is 1,000,000 terabytes; given 4TB drives, that translates to roughly 250,000 physical units per exabyte, which isn't all that many. But Facebook is only one part of the story. Worldwide data storage is counted in zettabytes (1,000 exabytes), and is growing exponentially. At least for the foreseeable future, traditional “spinning rust” drives are the most cost-efficient way to store that data. While there may not be a lot of profit in hard drives, volume is important. And the win will go to manufacturers that reduce electrical consumption: to support 1 exabyte of storage, Facebook is using 1.5 megawatts of electricity; a 20% improvement would be huge.

As an aside, this also tells me that Facebook is a screaming short: you can't sustain exponentially-growing costs indefinitely.

Java-the-Language really is fading

The Java and JVM track had exactly one presentation that was primarily about Java-the-language, versus three that were primarily about Scala. More telling is to compare the Java track with JavaScript and Python: the latter were filled with talks about using the languages to solve problems, while the former featured frameworks.

OK, part of this may be selection bias specific to the conference. But it's another data point for my career planning.

“Clean Code” still isn't mainstream

In one talk that I attended, the presenter demonstrated how a relatively short Erlang function could be improved by breaking it into smaller pieces. While I enjoyed the presentation, I was compelled to point out that it was simply a restating of modular decomposition, a practice that has been part of software engineering since the 1970s. Or, applied after the fact, refactoring, which has been popular since the late 1990s.

I wonder if clean code is a habit that only comes from experience?

I prefer smaller conferences

While OSCon was big, with lots of sessions on many different tracks, I found it lacking: few of the talks provided insight, and many were downright boring (to the point that I walked out of a couple). This may have been due to my selection of talks, but I got a similar impression from other people that I talked to.

I think that part of the reason for this is that a conference the size of OSCon tries to cater to many needs, from the Perl programmers that were its core constituency, to people writing mobile apps. By comparison, a smaller conference (for example, ETE, which is produced by my employer) has the ability to focus: a few topics, but deep presentations on them.

Friday, July 19, 2013

What is Compelling?

For all that Java got right, I'm looking for something new. I can't claim that my motivation is a good one: I'm starting to feel that Java is déclassé, occupying a “mainstream niche” in which it provides database access to web applications. Yes, there are some interesting things happening with Big Data and Java, but most of the interesting projects that I've seen look to alternative languages even if they keep the JVM.

However, that presents the question “What is compelling about other languages?” Is there anything in the non-Java space that has a story similar to the one I saw from Java in 1999? There has been a lot of language development in the last decade or so, with new languages appearing and older languages gaining in popularity. Most of these languages have features that Java doesn't. Do any of those features present a compelling case to leave Java behind?

Before continuing, I want to state that I do not consider one language to be more “powerful” than another. That term gets a lot of use and abuse, but it's meaningless — at least if both languages are Turing-complete. Instead, I look at a language based on how expressive it is for a particular purpose. For example, APL is an extremely expressive language for an extremely narrow purpose.

My purpose is best described as “applications that transform quantum data, possibly in a distributed fashion.” This is almost a simile for “general purpose” computing, but highlights a few things that I find important:

  • Applications have multiple parts, and will run for an extended period of time. This implies that I want easy modularization, and that performance is important.
  • Quantum data is a fancy way of saying that I want to work with individual pieces of information, on both the input and output side. So statistical analysis features are not terribly important to me, but data manipulation features are. As a side note, I don't accept the LISPer's view that data-is-code-is-data.
  • Transformation has similar implications.
  • My view of “distributed computing” has more to do with how work is partitioned, rather than where those pieces run. Features that support partitioning are important, particularly partitioning of independent concurrent tasks. Once you partition your processing, you can move those pieces anywhere.

So, given this mindset, what are some features that Java doesn't have and that I find interesting?

Interpreted versus Compiled
One of the big things that drew me to Java was its elimination of the traditional build cycle: I didn't have to take a nap while building the entire project. This point was driven home to me last year, talking with a neighbor who is doing iOS development. He was touting how fast the latest high-end Macs could build his projects; I pointed out how the IDE on my nine-year-old desktop PC made that irrelevant.

Build cycles haven't completely gone away with Java: deploying a large Spring web-app, for example, takes a noticeable amount of time. But deployment takes time, even with an “interpreted” framework like Rails. For that matter, client-side JavaScript requires a browser refresh. In any case, saving a few seconds isn't really important.

The REPL (read-eval-print loop), on the other hand, is a big win for interpreted languages. I came of age as a programmer when interactive debugging was ascendant, and I like having my entire environment available to examine and modify.

Duck Typing
Another of the benefits of so-called “scripting” languages is that objects are malleable; they are not rigidly defined by class. If you invoke a method on an object, and that object supports the method, great. If not, what happens depends on the language; in many, an “unimplemented method handler” is called, allowing functionality to be added on the fly. I like that last feature; it was one of the things that drew me to Rails (although I didn't stay there). And I think that function dispatch in a duck-typed language is closer to the “objects respond to messages” ideal of OOP purists.

On a practical basis, I find myself wishing for duck-typing every time I have to repeat a long parameterized type specification in Java. Especially when I have to repeat that specification both to declare and assign a variable. I've often wished for something like typedef, to reduce typing and provide names for generic structures.
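A contrived illustration of that repetition, with a hypothetical variable and type (in the pre-diamond style, where the full parameterized type must be spelled out once to declare and once to construct):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TypeRepetition {
    public static void main(String[] args) {
        // hypothetical declaration: the parameterized type appears twice,
        // which is exactly the kind of repetition a typedef would eliminate
        Map<String, List<Map<String, Object>>> resultsByUser =
                new HashMap<String, List<Map<String, Object>>>();
        System.out.println(resultsByUser.size());
    }
}
```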

But … there's a certain level of comfort in knowing that my compiler will catch my misspellings; I make a lot of them. And at the extreme, the flexibility of this model is no different than the idea of a single method named doSomething that takes a map and returns a map.

Lambdas, Closures, Higher-order Functions
At some point in the last dozen years, function pointers became mainstream. I use that term intentionally: the idea of a function as something that can be passed around by your program is fundamentally based on the ability to hold a pointer to that function. It's a technique that every C programmer knows, but few use (and if you've ever seen the syntax of a non-trivial function pointer, you know why). It's a feature that is present in Java, albeit with the boilerplate of a class definition and the need to rigidly (and repeatedly) define your function's signature. I think that, as a practical thing, duck typing makes lambdas easier: a function is just another malleable object.

Which is a shame, because the idea of a lambda is extremely useful. For example, reading rows from a JDBC query requires about a dozen lines of boilerplate code, which hides the code that actually processes the data. Ditto for file processing, or list processing in general.
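The JDBC case needs a database to demonstrate, but the file-processing case makes the same point and can be sketched self-contained. Here a hypothetical countNonBlank() method buries its one interesting line inside resource-management boilerplate:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class Boilerplate {
    // hypothetical example: count the non-blank lines in a file; almost
    // every line of this method is open/read/close ceremony
    public static int countNonBlank(String filename) throws IOException {
        BufferedReader in = null;
        try {
            in = new BufferedReader(new FileReader(filename));
            int count = 0;
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().length() > 0) {
                    count++;    // the one line that actually processes data
                }
            }
            return count;
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        tmp.deleteOnExit();
        FileWriter out = new FileWriter(tmp);
        out.write("one\n\ntwo\n");
        out.close();
        System.out.println(countNonBlank(tmp.getPath())); // prints 2
    }
}
```

With a lambda, the loop body would be the only thing you'd have to write; the ceremony would live in a library.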

While I like lambdas, closures quite frankly scare me. Especially if used with threads. But that's a topic for another post.

Lazy Evaluation
I don't believe that “functional” languages are solely about writing your code as a series of higher-order functions. To me, a critical part of the functional approach is the idea that functions will lazily evaluate their arguments. The example-du-jour calculates the squares of a limited set of integers. It appears to take the entire set of integers as an argument, but lazy evaluation means that those values are only created when needed.

As people have pointed out, lazy evaluation can be implemented using an iterator in Java. Something to think about is that it also resembles a process feeding a message queue.
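A minimal sketch of that iterator-based approach, with a hand-rolled "infinite" sequence standing in for the set of all integers:

```java
import java.util.Iterator;

public class LazySquares {
    // an "infinite" sequence of integers; each value is created only
    // when a consumer asks for it
    static Iterator<Integer> integers() {
        return new Iterator<Integer>() {
            private int next = 1;
            public boolean hasNext() { return true; }
            public Integer next() { return next++; }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }

    public static void main(String[] args) {
        Iterator<Integer> ints = integers();
        // square the first five integers; only five values are ever created
        for (int i = 0; i < 5; i++) {
            int n = ints.next();
            System.out.println(n * n);  // prints 1, 4, 9, 16, 25
        }
    }
}
```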

Structured Data Abstraction
One of the things I like about Groovy is that everything looks like an object, even XML. Combined with the null-safe navigation operator, you can replace a dozen lines of code (or an XPath) with a very natural foo?.bar?.baz. If your primary goal is data manipulation, features like this are extremely valuable. Unfortunately, the list of languages that support them is very short.
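For contrast, here is roughly what foo?.bar?.baz costs in Java; the Foo/Bar/Baz classes are hypothetical stand-ins:

```java
public class NullSafeChain {
    static class Baz { private final String value; Baz(String v) { value = v; } String getValue() { return value; } }
    static class Bar { private final Baz baz; Bar(Baz b) { baz = b; } Baz getBaz() { return baz; } }
    static class Foo { private final Bar bar; Foo(Bar b) { bar = b; } Bar getBar() { return bar; } }

    // the hand-written equivalent of Groovy's foo?.bar?.baz
    static String extract(Foo foo) {
        if (foo == null) return null;
        Bar bar = foo.getBar();
        if (bar == null) return null;
        Baz baz = bar.getBaz();
        if (baz == null) return null;
        return baz.getValue();
    }

    public static void main(String[] args) {
        System.out.println(extract(new Foo(new Bar(new Baz("hello"))))); // hello
        System.out.println(extract(new Foo(null)));                      // null
    }
}
```
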

A different concurrency paradigm
Concurrent programming isn't easy. Programming itself isn't easy: for any non-trivial application you have to maintain a mental model of how that application's internal state changes over time, and which parts get to change what state. A concurrent application raises the bar by introducing non-determinism: two (or more) free-running threads of execution that update the same state at what seem to be arbitrary times. Java provided a model for synchronizing these threads of execution, at least within a single process, but Java synchronization brings with it the specter of contention: threads sitting idle waiting for a mutex.

Concurrency has been a hidden issue for years: even a simple web-app may have concurrent requests for the same user, accessing the same session data. With the rise of multi-core processors, the issue will become more visible, especially as programmers try to exploit performance gains via parallel operations. While Java's synchronization tools work well for “accidentally concurrent” programs, I don't believe they're sufficient for full-on concurrent applications.

New languages have the opportunity to create new approaches to managing concurrency, and two of the leading approaches are transactional memory and actors. Of the two, I far prefer the latter.

Those are the features that interest me; other people will have different ones. But given those desired features, along with my overall goals for what I'm doing with computers, I've surveyed the landscape of languages and thought about which could replace Java for me. That's the topic of my next post.

Wednesday, July 17, 2013

Things that Java Got Right

Java turns 18 this year. I've been using it for 14 of those years, and as I've said before, I like it, both as a language and as a platform. Recently, I've been thinking more deeply about why I made the switch from C++ and haven't looked back.

One well-known if biased commentator said that Java's popularity was due to “the most intense marketing campaign ever mounted for a programming language,” but I don't think that's the whole story. I think there were some compelling reasons to use Java in 1995 (or, for me, 1999). And I'm wondering whether these reasons remain compelling in 2013.

Before going further, I realize that this post mixes Java the language with Java the platform. For the purposes of this post, I don't think they're different: Java-the-platform was created to support Java-the-language, and Java programs reflect this fact. Superficially, Java-the-language is quite similar to C or C++. Idiomatically, it is very different, and that is due to the platform.

Dynamic Dispatch
Both Java and C++ support polymorphism via dynamic dispatch: your code is written in terms of a base class, while the actual object instances are of different descendant classes. The language provides the mechanism by which the correct class's method is invoked, depending on the actual instance.

However, dynamic dispatch is not the default mechanism in C++; static dispatch is. The compiler creates a mangled name for each method, combining class name, method name, and parameters, and writes a hard reference to that name into the generated code. If you want dynamic dispatch, you must explicitly mark a method as virtual. For those methods alone, the compiler invokes the method via a reference stored in the object's “V-table.”

This made sense given the performance goals of C++: a V-table reference adds (a tiny amount of) memory to each object instance, and the indirect method lookup adds (a tiny amount of) time to each invocation. As a result, while C++ supported polymorphic objects, it was a rare C++ application that actually relied on polymorphism (at least in my experience in the 1990s). It wasn't until I saw dozens of Java classes created by a parser generator that I truly understood the benefits of class-based polymorphism.

Today, the prevalence of “interpreted” languages means that dynamic dispatch is the norm — and in a far more flexible form than Java provides.

Late Binding
At the time Java appeared, most applications were statically linked: the compiler produced an object file filled with symbolic references, then the linker replaced these symbolic references with physical addresses when it produced the executable. Shared libraries existed, and were used heavily in the Windows world (qv “DLL hell”), but their primary purpose was to conserve precious RAM (because they could be shared between processes).

In addition to saving RAM, shared libraries had another benefit: since you linked them into the program at runtime, you could change your program's behavior simply by changing the libraries that you used. For example, if you had to support multiple databases you could write a configuration file that selected a specific data access library. At the time, I was working on an application that had to do just that, but we found it far easier to just build separate executables and statically link the libraries. Talking with other people, this was a common opinion. The Apache web server was an exception, although in its case the goal was again to save RAM by not linking modules that you didn't want; you also had an option to rebuild with statically-linked libraries.

Java, by comparison, always loads classes on an as-needed basis when the program runs. Changing the libraries that you use is simply a matter of changing your classpath. If you want, you can change a single classfile.

This has two effects: first, your build times are dramatically reduced. If you change one file, you can recompile just that file. When working on large C and C++ codebases, I've spent a lot of time optimizing builds, trying to minimize time spent staring at a scrolling build output.

The second effect is that the entire meaning of a “release” changes. With my last C++ product, bug fixes got rolled into the six-month release cycle; unless you were a large customer who paid a lot in support, you had to wait. With my first Java project, we would ship bugfixes to the affected customers as soon as the bug was fixed. There was no effort expended to “cut a release,” it was simply a matter of emailing classfiles.

Classloaders
Late binding was present in multiple other languages at the time Java appeared, but to the best of my knowledge, the concept of a classloader — a mechanism for isolating different applications within a single process — was unique to Java. It represented late binding on steroids: with a little classloading magic, and some discipline in how you wrote your applications, you could update your applications while they were running. Or, as in the case of applets, applications could be loaded, run, and then be discarded and their resources reclaimed.

Classloaders do more than simply isolate an application: the classloader hierarchy controls the interaction of separately-loaded applications (or, at least, groups of classes). This is a hard problem, and Java classloaders are an imperfect solution. OSGi tries to do a better job, but adds complexity to an application's deployment.
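The isolation aspect can be demonstrated in a few lines: a URLClassLoader with no search path and a null parent delegates only to the bootstrap loader, so it can resolve core classes but nothing from the application (the class name here is a hypothetical stand-in):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // no search path, null parent: this loader sees only bootstrap classes
        ClassLoader isolated = new URLClassLoader(new URL[0], null);

        // core classes resolve via the bootstrap loader...
        System.out.println(isolated.loadClass("java.lang.String").getName());

        // ...but application classes are invisible to it
        try {
            isolated.loadClass("LoaderDemo");
            System.out.println("visible");
        } catch (ClassNotFoundException e) {
            System.out.println("not visible from the isolated loader");
        }
    }
}
```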

Threads
Threads are another thing that predated Java but didn't see a lot of use. For one thing, “threads” gives the impression of a uniform interface, which wasn't the case in the mid-1990s. I've worked with Posix threads, Solaris threads, and one other whose name I can't remember. They all had the same basic idea, but subtly different APIs and implementation. And even if you knew the APIs, you'd find that vendor-supplied code wasn't threadsafe. Faced with these obstacles, most Unix-centric programmers turned to the tried-and-true multi-process model.

But that decision led to design compromises. I think the InConcert server was typical in that it relied on the database to manage consistency between multiple processes. We could have gotten a performance boost by creating a cache in shared memory — but we'd have to implement our own allocator and coherence mechanism to make that work. One of the things that drove me to prototype a replacement in Java was its ability to use threads and a simple front-end cache.

I could contemplate this implementation because Java provided threads as a core part of the language, along with synchronization primitives for coordinating them. My cache was a simple synchronized Map: retrieval operations could probe the map without worrying that another thread was updating it. Updates would clear the cache on their way out, meaning that the next read would reload it.
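That design reduces to something like the following sketch (the HashMap backing store is a stand-in for the real database, and isn't itself thread-safe):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SimpleCache {
    // stand-in for the database; a real implementation would go over JDBC
    private final Map<String, String> store = new HashMap<String, String>();

    // every individual operation on this map is synchronized on the map itself
    private final Map<String, String> cache =
            Collections.synchronizedMap(new HashMap<String, String>());

    public String get(String key) {
        String value = cache.get(key);   // probe without explicit locking
        if (value == null) {
            value = store.get(key);      // cache miss: go to the backing store
            if (value != null) {
                cache.put(key, value);
            }
        }
        return value;
    }

    public void update(String key, String newValue) {
        store.put(key, newValue);
        cache.clear();                   // clear the cache "on the way out";
                                         // the next read repopulates it
    }
}
```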

That said, today I've come to the belief that most code should be thread-agnostic, written as if it were the only thing running, and not sharing resources with other threads. Concurrent programming is hard, even when the language provides primitives for coordination, and in a massively-parallel world contention is the enemy. A shared-nothing mentality — at least for mutable state — and a queue-based communication model makes for far simpler programs and (I believe) higher overall performance.
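A minimal sketch of that shared-nothing, queue-based model: the worker owns all of its state, and its only contact with the rest of the program is through two queues (the messages and "poison pill" convention are made up for illustration):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueWorker {
    private static final String POISON = "__STOP__";

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> inbox = new LinkedBlockingQueue<String>();
        final BlockingQueue<String> outbox = new LinkedBlockingQueue<String>();

        // the worker shares no mutable state with the main thread; all
        // coordination happens inside the queues
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String msg = inbox.take();
                        if (POISON.equals(msg)) break;
                        outbox.put(msg.toUpperCase());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();

        inbox.put("hello");
        inbox.put("world");
        inbox.put(POISON);
        worker.join();

        System.out.println(outbox); // prints [HELLO, WORLD]
    }
}
```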

Library Support
All of the above were neat, but I think the truly compelling reason to use Java in the 1990s was that it came with a huge standard library. You wanted basic data structures? They were in there. Database access? In there. A way to make HTTP requests? In there. Object-oriented client-server communication? Yep. A GUI? Ugly, but there.

This was a time when the STL wasn't yet ubiquitous; indeed, some C++ compiler vendors were just figuring out how to implement templates (causing no end of problems for cross-platform applications that tried to use them). The RogueWave library was popular, but you had to work for a company willing to buy it. I think every C++ programmer of the 1990s had his or her own string class, with varying levels of functionality and correctness. It was a rite of passage, the first thing you did once you figured out how classes worked.

Java's large — and ever-growing — library has been both a blessing and a curse. It's nice to have one standard way to do most tasks, from parsing XML to computing a cryptographic hash. On the other hand, in JDK 1.6 there are 17,484 classes in 754 packages, many imported whole from third-party libraries. This is bloat for those who don't need the features. Worse, it creates friction and delay for updates: if you find a bug in the XML processor, should you file it with Oracle or Apache? And will it ever get fixed?

Those were the things that I found compelling about Java. Other people had different lists, but I think that everyone who adopted Java in the late 1990s and early 2000s did so for practical reasons. It wasn't simply Sun's marketing efforts; we actually believed that Java offered benefits that weren't available elsewhere.

The 2000s have been a time of change on the language scene: new languages have appeared, and some older languages have become more popular. What would make a compelling case for leaving Java behind?

Monday, July 15, 2013

Signs that Java is Fading

My website is losing traffic. It never had terribly high traffic, other than the days it was mentioned on Reddit or Hacker News, but in the last year that traffic has been cut in half. There are no doubt many reasons for this, but I'm going to take a dramatic interpretation: it's another indication that the time of Java is passing.

Bold words for someone who only gets a few hundred hits a day, but the character of those hits is changing. The two most-hit pages on my site have consistently been those on bytebuffers and reference objects: topics that are of interest to people doing “interesting” things, and not something that you'd use in a typical corporate web-app. Recently, these two have been eclipsed by articles on parsing XML and debugging out-of-memory errors.

This sense that Java isn't being used for “interesting” projects comes from other sources as well. Colleagues and friends are looking into other languages, some on the JVM and some not. One of the best recruiters I know, a long-time specialist in Java (and founder of the Philadelphia Java Users Group) is now emphasizing other languages on his home page.

Of course, Java isn't going to disappear any time soon. There are billions of dollars of sunk costs in Java software, and it needs to be maintained. And with hundreds of thousands of Java programmers in the job market, employers know that they can always staff their projects, old and new. Corporate inertia is all but impossible to overcome, as evidenced by the fact that you still see plenty of COBOL positions open.

Java is occasionally called “the COBOL of the 21st century,” and I've never found the comparison apt. But the position of Java today seems to be similar to that of COBOL in the 1980s: a huge installed base, huge demand for developers, but slotted into a specific application area. If you wanted to venture out of the accounting realm, you looked for something else.

Yesterday was my birthday. This upcoming January will mark 30 years that I've received a regular paycheck for making computers do what other people want. In that time, I've switched areas of expertise several times. I figure that I'm going to be in this business for another 15 to 20 years, and I want to be doing something interesting for those years. So it's time to re-evaluate Java, and think about what my next area of expertise should be.

Thursday, July 11, 2013

Is anyone else waiting for this?

Google Search will be retiring on October 23

Today we're announcing a new Connected Search feature for Google+. We believe that this will be an easier way to find the information you need. Once you add Search to your circles, you'll see every request you ever made, right there on your Google+ home page. As the search results change, you'll see those changes in real-time.

We realize that some of you have grown to like the existing Google Search feature, with its single text field and “I'm Feeling Lucky” button. While we regret the need to discontinue this service, we think you'll find that Search on Google+ will serve all your needs.

Tuesday, June 18, 2013

Perils of Reflection: Public Methods on Private Classes

Here's a puzzle: what does the following code print?

Iterator<String> itx = new ArrayList<String>().iterator();
Method method = itx.getClass().getMethod("hasNext");
System.out.println(method.invoke(itx));

You might expect it to print false: you're invoking the hasNext() method on an iterator over an empty list. Instead, you get a rather confusing exception:

Exception in thread "main" java.lang.IllegalAccessException: Class com.example.ReflectionFailure can not access a member of class java.util.AbstractList$Itr with modifiers "public"
    at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:65)
    at java.lang.reflect.Method.invoke(Method.java:588)
    at com.example.ReflectionFailure.main(ReflectionFailure.java:16)

Huh? IllegalAccessException indicates that the program doesn't have access to the method. But hasNext() is a public method; how can I not have access to it? The following code compiles and runs without a problem; aren't I doing the same thing with reflection?

Iterator<String> itx = new ArrayList<String>().iterator();
System.out.println(itx.hasNext());

Well, not exactly. If you look at the bytecode of the explicit example, you'll see that you're actually invoking Iterator.hasNext(), not the method defined by the implementation class:

15:  invokeinterface #32,  1; //InterfaceMethod java/util/Iterator.hasNext:()Z

And if you change the reflective code so that you get hasNext() from the Iterator class, it runs without a problem:

Iterator<String> itx = new ArrayList<String>().iterator();
Method method = Iterator.class.getMethod("hasNext");
System.out.println(method.invoke(itx));

But the first piece of code isn't doing that. Instead, it's trying to do something like the following:

AbstractList.Itr itx = (AbstractList.Itr)new ArrayList().iterator();

That code won't compile, because AbstractList.Itr is a private class; the compiler rightly complains that it isn't visible.

I'm not the first person to stumble over this. There are a number of bugs entered about this problem, although they seem to focus on the use of an inner class. The problem still manifests even if you have a package-private top-level class in another package. There's also a posting on the Java Specialists blog that covers the problem and its workaround:

Iterator<String> itx = new ArrayList<String>().iterator();
Method method = itx.getClass().getMethod("hasNext");
method.setAccessible(true);
System.out.println(method.invoke(itx));

Method.setAccessible() tells the JVM to allow the calling program to bypass some of the normal security measures; its brother, Field.setAccessible(), is a mainstay of injection frameworks and bean introspectors. There's only one problem: if you're running in a sandbox, you aren't allowed to tell the JVM to ignore its safety features; you'll get a SecurityException instead.

The only always-usable solution is to figure out where the method is actually defined, and use the Method corresponding to that definition. This is not a trivial task: you have to walk the class hierarchy, looking at all of the concrete or abstract (but still accessible) classes in the hierarchy, as well as all of the interfaces that they implement.
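Here's one way that walk might look. This is a simplified sketch: it checks each class in the hierarchy and its directly implemented interfaces, but doesn't recurse into superinterfaces, which a production version would need to do:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.Iterator;

public class AccessibleFinder {
    // Walk up the hierarchy looking for a public class or public interface
    // that declares the named method; that Method can be invoked without
    // an IllegalAccessException.
    public static Method findInvocableMethod(Object target, String name, Class<?>... params) {
        for (Class<?> klass = target.getClass(); klass != null; klass = klass.getSuperclass()) {
            if (Modifier.isPublic(klass.getModifiers())) {
                try {
                    return klass.getMethod(name, params);
                } catch (NoSuchMethodException ignored) { /* keep looking */ }
            }
            for (Class<?> iface : klass.getInterfaces()) {
                if (!Modifier.isPublic(iface.getModifiers())) continue;
                try {
                    return iface.getMethod(name, params);
                } catch (NoSuchMethodException ignored) { /* keep looking */ }
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Iterator<String> itx = new ArrayList<String>().iterator();
        Method method = findInvocableMethod(itx, "hasNext");
        System.out.println(method);             // Iterator's method, not Itr's
        System.out.println(method.invoke(itx)); // prints false
    }
}
```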

So is this a bug in the JVM? If you couldn't use reflection to accomplish something that you could in explicit code, I'd say yes. In this case, however, you aren't mimicking actual code, you just think you are. So, while inconvenient, in my opinion not a bug.


Update: I've been researching how to determine whether or not a Method can be invoked without an access error. I assumed that isAccessible() would work: if it returned true, I was good to go. I quickly learned that it always returns false, unless you've specifically called setAccessible(true). Not, in my opinion, terribly useful.

I've combed through the documentation several times, but can't find any method that will tell me if I can invoke a Method. I suppose this is reasonable: accessibility depends on the object doing the invoking. This means that any “find the accessible method” utility would have to take the invoking class as an argument, and there would be no guarantee that the returned method could be invoked by an instance of another class. Not worth pursuing.