
Saturday, April 9, 2016

Polyglot Programmers

I recently received an email from a well-known consulting firm that said, in essence, “you're a polyglot programmer, we're looking for those, let's talk.”

I can see where they got that impression: in the last three years I've worked with Java, JavaScript, Scala, Ruby, Python, Clojure, and SQL. In my 30+ year career, I've worked with over 15 languages. And there are a half-dozen more that I haven't used professionally. But why would a recruiter look for that?

One simple answer is that, as a consulting firm, they have to staff projects for clients with differing development environments. A person who knows multiple languages will be easier to assign. But I think there's more to it than that: “polyglot programmer” and “polyglot application” name an approach to programming that is espoused by this same firm. One that I don't agree with.

Neal Ford is often credited with coining the term polyglot programming, in a 2006 blog post. His thesis — examined in greater detail by Dean Wampler in this 2010 presentation — is that different languages have strengths in different areas, and programmers should use the best tool for the job.

I don't think that it was coincidence that this philosophy emerged in the mid-2000s, from people associated with the Java ecosystem. This was a time when Java-the-language had stagnated but the JVM was the base for a flourishing community of “not Java” languages. Groovy in 2003, Scala in 2004, and Clojure in 2007 are some of the notable examples, although there are many others. All of these languages offered things that Java did not, higher-order functions being one of the more obvious. And all of them offered the ability (more or less) to interoperate with existing Java code.

Given these new abilities, it seemed only reasonable to adopt them: to use Groovy for your XML processing, or Scala for-comprehensions to process nested structures, or rewrite your multi-threaded code to use Scala actors. And perhaps not stop there, but adopt Martin Fowler's strangler application pattern, replacing the underlying Java code entirely. And who wouldn't want to leave behind Java's verbosity and definitional boilerplate?

Ten years have passed since Neal Ford's post, and I question whether the premise is still valid (if it ever was). Here are a few reasons:

  • Convergence of languages

    The mid-1960s also saw a flowering of programming languages with widely varying features: APL (1964) for mathematics, BASIC (1964) as an introductory language for education, PL/I (1965) for large-scale applications, Simula (1965) for simulating real-world interactions, SNOBOL (1962) for text processing, and many others. I was a toddler at the time, so have no first-person knowledge, but I think that these languages arose from much the same situation: mainstream languages lacked features, and the reduced cost and increased sophistication of computer systems lowered the barrier to creating something new.

    But by the end of the 1970s, most of these languages were on the path to obscurity (BASIC notwithstanding), and older languages had adopted many of the ideas that they promoted (even FORTRAN became block-structured). And then there was C, a new language that borrowed ideas from the earlier languages but recast them in a form that was better suited to the rising world of microprocessors.

    If history repeats itself, I think we're in the process of a similar consolidation. Java now has higher-order functions; if that was your reason for looking elsewhere, is it still valid? JavaScript is available outside the browser, it's performant, and now has a vast collection of libraries; is there still a reason to use separate languages for your front and back end code? Or will there be something new that takes over the world?

  • Increased maintenance cost

    Many of the people in favor of writing portions of their system in alternative languages point to the efficiency with which they can write code. And while I accept the truth of that, I also realize two things. First, that writing code is a tiny part of initial implementation. And second, that initial implementation is a tiny part of the effort required during an application's lifetime.

    Given this, using multiple languages for a single application means that all of the people tasked with its maintenance have to be at least comfortable with all of those languages, if not experts. This can be particularly painful if one of the team members has a fondness for an obscure language.

But even in the case of popular languages, it can be a problem. One of my projects was with a company whose main service was written in a mix of Node.js and Rails. I think that their original goal was to transition from Rails to Node, but it went awry: the Rails developers left, and the Node developers realized that Node didn't (at that time) have all the features they needed. So both codebases remained a core part of the business, and the company had to find people with both skillsets in order to maintain the software.

  • Programmer focus

    In my opinion the best reason to be wary of polyglot projects is that developers can only know a limited number of languages — I use 1½ as my personal limit. Oh, sure, I've met people who claim to know a half-dozen or more languages. But when pressed, they only “know” those languages at a very basic level.

    True knowledge extends past syntax and semantics, to idiom and environment. It is the ability to choose the best implementation for any given goal, without thinking — the stage of learning known as “unconscious competence.” And with programming languages, the best implementation may be very different depending on the language.

    Which is not to say that you can't transition from one language to another. That's easy, I've done it several times in my career. But you'll find that the language you're most familiar with colors the way you work with the new language — when I started working with JavaScript, I attempted to use Java-style constructor functions rather than a map of data. You'll know when you've transitioned when the new language changes the way you write the old.

While I don't think that polyglot applications are a good idea, I wholeheartedly support learning multiple languages as a way to expand your abilities. A few years ago I worked my way through Programming Erlang, and consider that time well-spent: it introduced me to pattern matching, gave me a deeper understanding of actor systems, and showed me an elegant (if inefficient) way to implement Quicksort.
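
For the curious, here's roughly what that style of Quicksort looks like transcribed into Scala (a sketch of the filter-based approach, not the book's Erlang, and not code I'd ship): readable, but it walks the remaining list twice at every pivot and allocates a new list each time.

// Filter-based Quicksort, in the spirit of the Erlang example: elegant to
// read, but each step traverses the rest of the list twice and builds new lists.
def qsort(xs: List[Int]): List[Int] = xs match {
  case Nil           => Nil
  case pivot :: rest =>
    qsort(rest.filter(_ < pivot)) ::: pivot :: qsort(rest.filter(_ >= pivot))
}

qsort(List(3, 1, 4, 1, 5, 9, 2, 6))   // List(1, 1, 2, 3, 4, 5, 6, 9)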

But it didn't make me want to mix Erlang and Java in a single project.

Saturday, March 26, 2016

Layoffs

Where are you in your company's layoff list? Every company has one, even if it's just a vague idea in the mind of the CEO. By looking around, you should be able to determine approximately where you stand. In my department of 20 people, for example, I believe myself to be #3. Maybe #2, perhaps as low as #5. But one of the first people to go, regardless.

Let me stop here, to stress that being laid off does not mean that you're incompetent, or less competent than the people who were retained.

Layoffs are a coldly rational response to the prospect of running out of money. Sometimes they happen well in advance of a crisis, sometimes they happen when the company is on the brink of bankruptcy. But in either case, being laid off simply means that the skills you bring to the table are perceived as being less valuable than the associated cost of your salary and benefits. I realize that's not much comfort when you're the person being laid off.

I've been laid off three times in a 30+ year career. The first time was six months into my second job, and happened on the Monday that I returned from vacation. I was working for a subsidiary of a midwestern company that made laboratory instruments, an acquisition that had been an independent startup a year or so earlier. For whatever reason, the parent company decided to consolidate operations at their home office. We were all gathered into the large conference room, and told that we no longer had jobs. The third time was similar: I was working for a subsidiary of a subsidiary of a large telecom company that learned too late that flip-phones were passé.

There's not much to be learned from these two examples, nor much to be done about them; you simply have to be prepared. If you're lucky, they come with a decent severance package.

My second layoff was more interesting. It happened toward the end of the dot-com crash. We'd been through three rounds of layoffs already, along with an enforced vacation during the winter holidays. At the start of the new year I sat down with my division VP, and said “I know I'm now at the top of the layoff list, let's talk.” Six months later (after I'd been given a large retention bonus) the fourth round hit, and he said “you were right.”

So how did I know? Like many things, your likelihood of being laid off can be expressed as a quad-chart:

[Quad-chart: salary on the left axis, strategic vs. tactical work on the bottom axis]

The left axis is salary: the more expensive you are, the bigger a target you become. You may argue that this doesn't take into account productivity, but I believe that's irrelevant. If you're at a company that values productivity, then everyone will be productive; the idea that there's a 10x difference between top and bottom is a myth. And if you're at a company that doesn't value productivity, you're just a cog in the machine with a known cost, regardless of how productive you think you are (and really, what are you doing at such a company?).

The bottom axis, strategic versus tactical, is the one that I think is more interesting. Strategic work is for the future: shaping your next product or next release. Tactical work is about now: operations, bugfixes, responding to immediate customer desires. When a company is forced to choose, they'll value tactics above strategy every time. In the case of my second layoff, I wasn't the highest-paid person but my work was entirely focused on the future; that put me at the top of the list.

So what do you do with the knowledge that you're near the top of the layoff list?

The facile answer is that you update your resume, but that's wrong on several counts. First, of course, is that your resume should always be up to date, in case the perfect job gets dropped in your lap. Second, if you change jobs you're likely to move from the top of one layoff list to the top of another, without the offsetting benefit of tenure (yeah, it isn't a simple quad-chart; it never is). And finally, because knowing where you are on the list only matters when layoffs are imminent.

If you're in a company that's sailing along, with no storms on the horizon, there's little reason to care. Yes, there's always the chance of a black swan event that puts the company in crisis, but those aren't worth sleepless nights.

If you work for a company that does have obvious financial difficulties ahead, you should consider your options. Perhaps you transition from a strategic position to one that's more tactical. Or, again, do nothing. I work for a venture-funded startup, which means that either we become profitable or go out of business. But I'm doing the job that I was hired for, I'm enjoying it, and I'm prepared for the possibility of finding myself without a job.

And if you work for a company that has had layoffs and looks like it will have more? Perhaps then it's time for an honest conversation with your manager (or better, your manager's manager). Talk about where you stand on the layoff list, and what steps you can take to either lower your position or ease your departure.

In my case, I was already thinking about moving to Philadelphia to be with my now-wife. Perhaps my conversation with our VP raised me to the very top of the list if I wasn't already there. But it meant that both of us were prepared for what happened six months later.

Friday, August 28, 2015

Thoughts on Clojure After Using It For Six Months

When I was in high school, in the late 1970s, “pocket” scientific calculators were hot. They were portable computation devices that hadn't even existed a decade prior, were incredibly expensive when they came out, but were now reasonable birthday presents for the budding geek. There were several manufacturers, but the only ones that mattered were Texas Instruments (TI) and Hewlett Packard (HP). The cool kids all had HPs.

There's no question: HP calculators were better. A TI calculator was about half the price of an HP calculator with the same features, but it reached that price point through cheaper build quality. You could drop an HP calculator down a flight of stairs without ill effect, but a TI would start to have keyboard problems within a year of normal use. In fact, that's how I got my first calculator: my father's TI stopped working, he sent it to TI for repair, but bought another before it returned.

Another difference between TI and HP was that the former used algebraic notation: “1 + 2 =” while the latter used reverse Polish notation (RPN): “1 ENTER 2 +”. From the perspective of a calculator designer, RPN makes a lot of sense: algebraic notation requires more work, and the transistors that performed that work could be put to better use elsewhere. Or put another way — one intended to taunt an HP-wearing geek — RPN-based calculators were less advanced than algebraic calculators.
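
To see why, consider what an RPN evaluator actually has to do. Here's a toy sketch (in Scala, for consistency with the other posts here, and obviously not how a 1970s calculator was built): numbers are pushed onto a stack, each operator pops two values and pushes the result, and precedence never enters the picture.

// A toy RPN evaluator: push numbers, pop two values per operator. There is
// no precedence and no parentheses, so "1 2 +" needs almost no machinery.
def evalRpn(tokens: List[String]): Double =
  tokens.foldLeft(List.empty[Double]) {
    case (b :: a :: rest, "+") => (a + b) :: rest
    case (b :: a :: rest, "-") => (a - b) :: rest
    case (b :: a :: rest, "*") => (a * b) :: rest
    case (b :: a :: rest, "/") => (a / b) :: rest
    case (stack, number)       => number.toDouble :: stack   // assumes well-formed input
  }.head

evalRpn(List("1", "2", "+"))   // 3.0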

Here's the interesting part: the vast majority of those HP aficionados were convinced that RPN was the reason that HP calculators were better.

Monday, May 5, 2014

Thoughts on Scala After Using It for Six Months

I started working with Scala professionally last September. At the time, I wrote a post about the features of the language that I liked and didn't like, from the perspective of an experienced developer new to the language. My plan was to write a retrospective after several months of use, looking at how my feelings about those features had changed over time.

I wrote several drafts that followed that theme, but liked none of them. While I certainly built up a list of likes and dislikes, I found that those were overwhelmed by two general themes: I find the language clumsy, and it requires too much mental effort — effort that is taken away from whatever problem I'm trying to solve.

With that sentence, you may think the rest of this post is a (long) rant. That isn't my intent. I'm not trying to convince anyone that Scala is a horrible language; in fact, there are many parts of it that I quite like. It's simply one person's experience and opinion, with illustrative examples. Feel free to disagree with my points and with the examples I chose.

I'll start with clumsiness. The example that comes to mind most quickly is the special syntax needed for one function with a variable argument list to call another with the same argument list:

  def foo(xs: Int*) = ???

  def bar(xs: Int*) = {
    foo(xs)      // this won't compile: inside bar, xs is a Seq[Int], not an Int*
    foo(xs: _*)  // this will: the ": _*" ascription expands the sequence back into varargs
  }

I have no doubt that there is a theoretical reason for this behavior. From the practical perspective, however, it's annoying: I can't take a parameter and pass it as an argument, even though the values are declared identically. It's not a big annoyance (and I find that I use varargs far less in Scala than in Java), but every time that I run into it — or any other syntactical oddity — I have to stop and think about why the compiler is complaining.*

And that is part of my second issue: high mental effort, because I need to think about what the compiler is doing rather than what my code is doing. Two examples of this are type inference and for comprehensions.

My issues with Scala type inference surprised me. I've worked with duck-typed languages, and never felt that their complete absence of type information impeded my understanding of code. With Scala, however, I don't know what I'd do without my IDE showing me type info on hover (and, unfortunately, it doesn't do that very well).

I think that the reason is that Scala functions tend to be more complex, with chains of higher-order function calls that may themselves require type inferencing from other functions. Such code usually makes perfect sense, but requires the programmer to carefully piece together what's happening at each step. I've adopted the habit of adding a return type specification to every function and minimizing the complexity of anonymous functions, and find these go a long way toward demystifying such constructs.
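
As a sketch of what that habit looks like in practice (an invented example, not code from the project): the two definitions below are identical except for the annotation, but only the second tells the reader, and the compiler, what the chain is supposed to produce.

case class Post(author: String, published: Boolean)

// Without a return type, the reader has to replay the inference in their head:
// filter gives a Seq[Post], groupBy a Map[String, Seq[Post]], map a ...
def postCounts(posts: Seq[Post]) =
  posts.filter(_.published)
       .groupBy(_.author)
       .map { case (author, ps) => author -> ps.size }

// With the annotation, the endpoint is documented at the definition, and the
// compiler complains here, not at some distant call site, if the chain drifts.
def postCountsAnnotated(posts: Seq[Post]): Map[String, Int] =
  posts.filter(_.published)
       .groupBy(_.author)
       .map { case (author, ps) => author -> ps.size }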

Something that I find harder to resolve are for-comprehensions. I first worked with for-comprehensions (aka list-comprehensions) in Python, and the mental model from that experience was reinforced when I learned Erlang. In both of these languages, a for-comprehension translates into nested iteration (with access to enclosing scope). **

Scala, by comparison, translates a for-comprehension into map() and flatMap() calls. On the one hand, this lets you do cool things like stringing together operations that return an Option: if any of the operations return None, the operation short-circuits and returns None. On the other hand, it makes you dependent on how a particular class implements those methods.
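
A small sketch of that translation, with made-up lookups: the for-comprehension below is sugar for the flatMap/map chain that follows it, and because both lookups return an Option, a None from either one short-circuits the whole expression.

// Two made-up lookups that can each fail.
def findUser(name: String): Option[Int]    = if (name == "anna") Some(42) else None
def findOrder(userId: Int): Option[String] = if (userId == 42) Some("order-1") else None

// The for-comprehension...
val viaFor: Option[String] = for {
  id    <- findUser("anna")
  order <- findOrder(id)
} yield order

// ...desugars to a flatMap/map chain; a None from either lookup makes the
// whole expression None without evaluating the rest.
val viaCalls: Option[String] =
  findUser("anna").flatMap(id => findOrder(id).map(order => order))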

Here's a somewhat contrived example that represents a common use of for-comprehensions: flattening a hierarchical data structure.

val data = List(
            "foo" -> List("foo", "bar", "baz"),
            "argle" -> List("argle", "bargle", "wargle"))

val result = for {
  (key, values) <- data
  value <- values
} yield (key, value) 

The result is a list of tuples: ("foo", "foo"), ("foo", "bar"), and so on. Now a slight change:

val data = Map(
            "foo" -> List("foo", "bar", "baz"),
            "argle" -> List("argle", "bargle", "wargle"))

val result = for {
  (key, values) <- data
  value <- values
} yield (key, value) 

This comprehension translates into an identical sequence of calls, but now the outermost flatMap() is called on Map rather than List. This means that every tuple produced by yield is added to the map, with its first member as the key. And that means that the result contains only two entries, as repeated tuples with the same key are discarded.

You can look at this, say that it's not something you're likely to do, and moreover, that programmers should know the types of their data. But consider the case where data is actually a function call that's defined in some other module. That function originally returned the List, but then some developer noted that the keys are all unique, needed a Map for some other piece of code, and made the change. At that point, your for-comprehension has silently broken.

I'm going to give one more for-comprehension example; this one doesn't compile.

val data = Map(
            "foo" -> List("foo", "bar", "baz"),
            "argle" -> List("argle", "bargle", "wargle"))

val key = "foo"
val result = for {
  values <- data.get(key)
  value <- values
} yield (key, value)

The problem here is that Map.get() returns an Option, and Option.flatMap() expects a function that returns an Option. But the generator value <- values returns a List. To make this compile, you need to turn the Option into a sequence:

val result = for {
  values <- data.get(key).toSeq
  value <- values
} yield (key, value)

The overall issue is one of mental models. In Erlang, for-comprehensions represent a very simple mental model that can be applied identically in all cases. In Scala, the mental model seems equally simple at first, but in reality it changes depending on runtime data types. To use a Scala for-comprehension effectively, the programmer has to spend time thinking about how a particular class implements map() and flatMap() — and hope that nobody else mucks with the data.

How much mental effort? That's hard to quantify, but subjectively I feel that I take twice as long to do a task in Scala, even when the task is one that's most naturally implemented in a functional style.

Perhaps this is an indictment of my mental capacity, rather than the language. Or perhaps six months just isn't enough time to become productive with Scala. Either of those cases, however, raises the question of whether Scala is an appropriate language for an average development team. Because, regardless of whatever nice features a language provides, you don't want to choose a language that reduces productivity.


* My comment when I ran into this issue: “Whatever would Scala programmers do without an underbar to smooth over the rough spots?”

** To me, coming from a database background, the Erlang approach is very natural: it's equivalent to a query, with joins and predicates. In Scala, as long as every term produces a Seq, the behavior is identical.
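
For example (made-up data), with Seqs on both sides the comprehension reads, and behaves, like a join with a predicate:

// With Seqs everywhere, a for-comprehension is just nested iteration with
// a filter -- the query-like reading holds.
val owners = List(("anna", 1), ("bob", 2))
val pets   = List((1, "cat"), (1, "fish"), (2, "dog"))

val ownedPets = for {
  (owner, ownerId) <- owners
  (ownerRef, pet)  <- pets
  if ownerRef == ownerId
} yield (owner, pet)
// List((anna,cat), (anna,fish), (bob,dog))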

Monday, February 3, 2014

Coder vs Engineer

Stack Overflow is a fabulous resource for programmers. When I have programming questions, the first page of Google results is filled with links to its pages, and they usually have the answers I need. So why do I often feel depressed after browsing its questions?

The answer came to me this weekend: it's a hangout for coders, not engineers.

The question that prompted this revelation was yet another request for help with premature optimization. The program in question was tracking lap times for race cars, and the OP (original poster, for those not familiar with the acronym) was worried that he (she?) was extracting the list of cars and sorting it after every update. He saw this as a performance and garbage-collection hit, that would happen “thousands of times a second.”

That last line raised a red flag for me: I'm not a huge race fan, but I can't imagine why you would expect to update lap times so frequently. The Daytona 500, for example, has approximately 40 cars, each of which takes approximately a minute per lap. Even if they draft nose-to-tail and the whole field crosses the line within the same second, that's a peak of 40 updates per second, for a rather small set of objects.

To me, this is one of the key differences between a coder and an engineer: not attempting to bound the problem. Those interview questions about counting gas stations in Manhattan are all about this. You don't have to be exact, but if you can't set a bound to a problem, you can't find an effective solution. Sure, updating and sorting thousands of cars, thousands of times a second, that might have performance issues. But that's not how real-world races work.

Another difference is that, having failed to bound the problem (indeed, even to identify whether there is a problem), the coder immediately jumps to writing code. And that was the case for the people who answered this particular question: they produced a variety of solutions, each solving some interpretation of the OP's problem.

And I think that's what really bothers me: that coders will interpret a question in whatever way makes their coding easiest. This was driven home by a recent DZone puzzle. The question was how to remove duplicates from a linked list, “without using a buffer.” It's a rather poorly-worded question: what constitutes a buffer?

There were a few people who raised that question, but by far the majority started writing code. And some of the implementations were quite inventive in their interpretation of the question. The very first response limited the input list to integers, and used a bitset to track duplicates (at worst, that would consume nearly 300MB of RAM — a “buffer” seems modest by comparison). Another respondent seemed to believe that a Java ArrayList satisfied the “linked list” criteria.

Enough with the rant. The bottom line is that this industry needs to replace coders with engineers: people who take the time to understand the problems that they're tasked to solve. Before writing code.

Monday, January 20, 2014

Thirty Years

Thirty years ago this week I started my first full-time, paid position as a software developer. I'm not sure of the actual day; when I moved to Philadelphia I purged a lot of old papers, and the offer letter for this job was one of them. I was a nineteen-year-old who had left college (that's another story), and whose father had been very clear about the consequences. Fortunately, the winter of 1983/84 was a good year to be looking for software jobs in the Boston area.

I landed at Think Technologies, the creators of Macintosh Pascal, as an entry-level programmer. My chief contribution to that product was a demo program that ended up as the box artwork (also shown in the link above).

I started the same week that the Macintosh was released. As a result, I didn't see either it or the product while I was interviewing. What I did see was the Apple Lisa, with its mouse and graphical UI, and the sense of “this is something big” moved Think to the top of my list.

In retrospect, Think may have been one of the most valuable jobs of my career, and not just for getting in on GUIs at the beginning. I learned a lot about startups, and about the pain of real-world software development. I also learned to keep my ego in check: the company was filled with bright people, and Mel, the person whose ideas were the base of the product, combined a particularly creative career with a reserved, self-effacing exterior.

And I learned that no job lasts forever. Eventually, you reach an intellectual or perceptual plateau, and it's time to move on to something new. As a professional, you must of course balance your personal desire for growth against your responsibilities to see your work in production. But it's easy to get trapped on the plateau, and that's something that I quickly learned to avoid.

The intervening years have seen a lot of change in the industry: graphical user interfaces are now the standard, as are integrated development environments (Macintosh Pascal wasn't the first IDE that I used, but it was the first good one); computer networks are pervasive, as is the resulting knowledge-web of Internet-hosted documentation and anonymous helpful strangers; web-apps have taken us back to a central server with dumb terminals, although client-side JavaScript is an echo of the micro-computer revolution; and lastly, the machines are almost infinitely more powerful.

But programming is still about moving bits from one place to another without dropping any.

Tuesday, November 12, 2013

What Can You Build in a Day?

I spent this past Sunday morning as a mentor at the Pilot Philly hackathon. It's a 24-hour hackathon for high school students, and going in, I had some trepidation. I envisioned a post-Katrina-esque setting, filled with teenage boys who hadn't slept in 24 hours. I was also a bit concerned that hackathons foster a culture of “build it fast and who cares what happens next” that isn't compatible with long-term software development.

And I had a commitment for Saturday. But I exchanged a few emails with the organizers, they said they'd take any time I could give, so I arrived at 9 AM on Sunday. While the workspace — with tables jammed together and sleeping bags strewn on the floor — did have a post-apocalypse gestalt, my fears of chaos turned out to be unwarranted: the teams were focused on getting their projects done.

My gender assumptions also turned out to be wrong: while I didn't count, my sense was that at least a quarter of the participants were young women. I know that there are several organizations that focus on breaking down the gender divide in technology. Either they're having an effect, or the “digital generation” views such divides as ancient history.

I was pointed at a team building an app to select random takeout from local restaurants (“when you don't want to make a decision about lunch”). They had reached the point where they were getting data from the third-party service, but were having some trouble extracting the data they needed and formatting it in HTML (“if this was a command-line app we'd be all over it”). They were working in Python, a language that I've used only recreationally, but debugging is something that I can do. We worked through logging, and discussed how to filter the service results. I helped them with HTML, and showed them some of the resources that I use. And in a short time, the app was working.

I say “a short time,” but it was really the entire morning. At one point, while the team was working on details, I sent my wife a message that I'd be home around 1:30 for lunch. Her response: “you mean 2:30, right?” I have to be honest: it's been years since my at-work mornings passed so quickly.

And that is my take-away from this experience. I hope that, via anecdotes and advice, I have made these four kids better programmers. But there's no question that they've reminded me why I got into this field in the first place.

Friday, November 8, 2013

Review: Coursera's "Functional Programming Principles in Scala" by Martin Odersky

I just completed this class as part of my introduction to Scala. For those of you that aren't familiar with Coursera, it is one of a growing number of organizations that provide free online education, taught by professors from well-known universities. These organizations are part of a revolution in education, one that might just change the way we think about post-secondary education. At present, they provide an excellent way to sharpen one's skills.

This was my first Coursera class, and I was impressed by its basic mechanics: the class website, lectures, and assignments. I'm not certain how much of this is attributable to the Coursera team, and how much to Martin Odersky's grad students. I expect the automated grading tools to be the latter, but the general website to be shared between classes. Unfortunately, it appears that you can't look at a class' content without signing up, so I'll have to wait for my next class to make a comparison.

If you're thinking of taking an online class, I caution you not to underestimate the time commitment. This class was advertised as 5-7 hours a week, which doesn't seem like much until you try to fit it into your schedule. I gave over a couple of evenings a week for lectures, along with several hours for assignments (usually before work, but occasionally on weekends). Since I got my undergraduate degree in a part-time program, taking two classes a semester in addition to a full-time job as a software developer, I didn't think I'd have trouble with the workload. I don't think my lifestyle has changed much in the intervening dozen years, but it was tough to fit this class in.

On to the class. It was seven weeks long, with a couple of hours of lectures each week and a total of six programming assignments (some spanning more than a week). Lectures were broken into segments of 10-30 minutes, focusing on a single topic. However, they weren't simply videos: each lecture had one or more “quizzes” embedded within it, requiring viewer interaction and occasional typing. As a result, you'll need access to the Internet; they aren't something that you can complete on a plane.*

Assignments are packaged as ZIP files containing skeleton source code and an Eclipse project configuration. They can also be built using the sbt build tool, and you need to use this tool to submit them. The source code provides stub methods for the expected implementation and a few minimal unit tests; you are expected to add more tests while completing the assignment. When you submit an assignment, an automated grading tool runs a battery of unit tests and a style check (don't use short-circuit returns!). If you fail any of the unit tests, you'll see the test output but not the test itself; sometimes it's a challenge to figure out what the grader is trying to test.

If you've watched the MIT 6.001 lectures by Abelson and Sussman (aka the SICP lectures), you'll find the Odersky lectures very similar, at least at the beginning (to the point of using many of the same examples). Another point of similarity is that the lectures do not focus on the language itself. For Scheme, that's not so bad: the language is simple. For Scala, you'll want to have an introductory book by your side (for example, Odersky's own Programming in Scala). You'll also want to keep the Scala Standard Library documentation open.

My chief complaint with the course is that it occasionally focused on functional programming to the detriment of good programming, particularly in the assignments. For example, the assignment on Huffman coding used classes as mere data containers, moving all of the logic into standalone functions. Within the context of the assignment — pattern matching — this approach makes sense. However, in the real world I would want to see this implemented with runtime polymorphism: pattern matching in this example is just a glorified switch.**
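
To make the contrast concrete, here's a stripped-down sketch (my own, not the actual assignment code) of the same weight calculation written both ways:

// The assignment's style: the tree is inert data, and the logic lives in a
// standalone function that pattern-matches on its shape.
sealed trait CodeTree
case class Leaf(char: Char, weight: Int)         extends CodeTree
case class Fork(left: CodeTree, right: CodeTree) extends CodeTree

def weight(tree: CodeTree): Int = tree match {
  case Leaf(_, w) => w
  case Fork(l, r) => weight(l) + weight(r)
}

// The polymorphic alternative: each variant knows its own weight, so the
// logic is encapsulated with the data rather than scattered across matches.
sealed trait CodeTree2 { def weight: Int }
case class Leaf2(char: Char, weight: Int) extends CodeTree2
case class Fork2(left: CodeTree2, right: CodeTree2) extends CodeTree2 {
  def weight: Int = left.weight + right.weight
}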

That said, I highly recommend this class. If you haven't worked with a functional language in the past, you'll find the ideas new and (hopefully) interesting. Even if you have used a functional language, you might find something new: for me, the assignment on “functional sets” (implementing the Set abstraction in pure code) was particularly interesting. And if you're using Scala professionally, I think it's worthwhile to see how the creator of the language approaches problems.

I'll finish with an unexpected insight: Java's generics implementation — and in particular, that parameterized collections don't behave like arrays — was 100% intentional. I'm not sure if you'll be able to watch this lecture, but if you do, you'll see (about 7 minutes in) Odersky explain how Java arrays are bad because they don't allow the compiler to catch certain type errors (the same material is also covered in Programming in Scala). After seeing this, I dug up the JSR-14 spec, and saw that yes, Odersky was one of the people responsible.


* Actually, it may be possible to watch on a plane; a quick session with Firebug indicates that the videos are straightforward HTML5. I haven't looked at the code behind them, so it's possible that the quizzes are implemented in JavaScript.

** My concern is that the assignments distribute logic rather than encapsulate it. A more egregious example, but one less open to explication, is in a later assignment: defining a simple type alias to represent a list of pairs. What the simple alias does not convey, however, is that the list must remain sorted. An object-oriented approach would maintain this invariant in the class; consumers would never see/create an unsorted list. The functional approach, by comparison, requires every function that transforms the list to maintain the invariant.
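
A minimal sketch of that difference, with invented names (nothing from the actual assignment): under the alias, every transformation has to remember to re-sort; with a small class, the constructor enforces the invariant and consumers can never see an unsorted instance.

// The alias approach: nothing prevents callers (or other functions) from
// building an unsorted list, so every transformation must re-sort.
type Frequencies = List[(Char, Int)]

def add(fs: Frequencies, c: Char): Frequencies =
  ((c, fs.toMap.getOrElse(c, 0) + 1) :: fs.filterNot(_._1 == c))
    .sortBy(_._2)   // forget this anywhere, and the invariant silently breaks

// The object-oriented approach: the constructor sorts, so an unsorted
// instance can never be observed or created.
class SortedFrequencies private (val pairs: List[(Char, Int)]) {
  def add(c: Char): SortedFrequencies =
    SortedFrequencies((c, pairs.toMap.getOrElse(c, 0) + 1) :: pairs.filterNot(_._1 == c))
}
object SortedFrequencies {
  def apply(pairs: List[(Char, Int)]): SortedFrequencies =
    new SortedFrequencies(pairs.sortBy(_._2))
}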

Friday, August 2, 2013

Where Do I Want to Go?

TLDR: The title is a pun.

In the past year or so I've been looking at different languages, trying to gain a sense of what I might do “after Java.” My investigations have been cursory, as they must be: adopting a (bad) Matrix reference, I don't think you can truly know a language until you have fought with it. But maybe you can get a sense of whether it's worth fighting. So here are the languages I've considered, and my impressions:

Scala
Many of my colleagues believe that Scala is the best “JVM language”; we're doing several projects with it, and offer several classes in the language. I like some of the ideas that it incorporates, especially the clear delineation between mutable and immutable data (and a strong preference for the latter).

But … it's ugly. My first impression of Scala was that it had too many symbolic operators. As I kept seeing it, my impression changed: it had too many features and ideas, period. And based on discussions with my colleagues, there isn't an easy path to learning the language: you need to program idiomatically from day one.

That said, it looks like Scala will be the next language that I use professionally, probably within the next few months. Maybe my opinion will change.

Clojure
I don't get LISP, and the people who claim to get it always seem a bit too religious to me (not merely religious, but too religious). Perhaps I will be enlightened if anyone can explain exactly why macros are different from any other interpreter — or for that matter, any type of code generation. For now, however, Clojure (as the most-commercial LISP) isn't in my future.
Groovy
Why isn't Groovy more popular than it is? Of the features that I listed in my previous post, Groovy has everything except a concurrency story. It's easy to write, runs on the JVM, can interact directly with Java libraries, and offers some neat features of its own. Yet the only people who seem to use it are those who adopted Gradle as their build tool.

It was while I was trying to learn Gradle that I decided Groovy wasn't for me. To truly understand Gradle, you need to understand the Groovy approach to building DSLs by blending functions and blocks into a seamless whole. It's quite impressive once you figure it out. But, like Scala, there's no easy path to knowledge. The path from “Hello, World” to idiomatic Groovy involves a quantum leap.

And the documentation, quite frankly, sucks. It's a bunch of disconnected wiki pages that assume you know what you're looking for before you look for it. Perhaps not so bad on its own, but I believe it belies the approach to Groovy's language design: it's trying to be the new Perl.

Ruby
I've worked a bit with Ruby. It's nice, but I don't find myself wanting to devote the next section of my career to being a “Ruby developer.”
JavaScript
Does JavaScript have a future outside of the browser? I don't think so, Node.js notwithstanding. I think the biggest issue is the lack of library support: if I want a feature that's not part of the core language I need to write it myself. It's much like C++ before the STL (and while jQuery is great for interacting with the DOM, it's not a general-purpose library).

I also make far too many typos to be a happy JavaScript programmer. Making this worse, most of the frameworks that I've used swallow exceptions.

Erlang
Erlang is a strange language; it's very obvious that it started life as a rules engine. It has some extremely interesting features, such as list comprehensions that are closer to a database query than anything you'll find elsewhere. And most important, it has a notion of “shared nothing,” message-passing concurrency that I like.

But … it's a strange language. Actually, the way that I often describe it is “primitive,” designed to solve the problems of the 1980s. There's heavy emphasis on buffer manipulation, while strings are treated as lists of (ISO8859-1!) characters. I'm seeing ever more projects that use it, but I think there are better alternatives.

Python
Python is a beautiful language: every time I get to use it, I smile. Unfortunately, as far as I can tell (based on not-so-regular attendance at the local Python Users' Group), it's confined to scripting and “need it now and not tomorrow” programs. It also doesn't have a good (to me) concurrency story.
Go
Go has many of the features that I consider important, and it has an impressive pedigree (Pike and Thompson). It does have some strange quirks, which will be the topic of future posts. And it's a young language, still evolving; code written today may need to be rewritten tomorrow.

That said, Go seems to make the most sense for my future. Like Java, it was a language created for a clear purpose (concurrent back-end applications), and I happen to think that the future mainstream will be concurrent. Better then to have a language that was designed for that purpose, rather than one that has concurrency bolted on.

Coming up: my adventures in learning Go.

Monday, July 15, 2013

Signs that Java is Fading

My website is losing traffic. It never had terribly high traffic, other than the days it was mentioned on Reddit or Hacker News, but in the last year that traffic has been cut in half. There are no doubt many reasons for this, but I'm going to take a dramatic interpretation: it's another indication that the time of Java is passing.

Bold words for someone who only gets a few hundred hits a day, but the character of those hits is changing. The two most-hit pages on my site have consistently been those on bytebuffers and reference objects: topics that are of interest to people doing “interesting” things, and not something that you'd use in a typical corporate web-app. Recently, these two have been eclipsed by articles on parsing XML and debugging out-of-memory errors.

This sense that Java isn't being used for “interesting” projects comes from other sources as well. Colleagues and friends are looking into other languages, some on the JVM and some not. One of the best recruiters I know, a long-time specialist in Java (and founder of the Philadelphia Java Users Group) is now emphasizing other languages on his home page.

Of course, Java isn't going to disappear any time soon. There are billions of dollars of sunk costs in Java software, and it needs to be maintained. And with hundreds of thousands of Java programmers in the job market, employers know that they can always staff their projects, old and new. Corporate inertia is all but impossible to overcome, as evidenced by the fact that you still see plenty of COBOL positions open.

Java is occasionally called “the COBOL of the 21st century,” and I've never found the comparison apt. But the position of Java today seems to be similar to that of COBOL in the 1980s: a huge installed base, huge demand for developers, but slotted into a specific application area. If you wanted to venture out of the accounting realm, you looked for something else.

Yesterday was my birthday. This upcoming January will mark 30 years that I've received a regular paycheck for making computers do what other people want. In that time, I've switched areas of expertise several times. I figure that I'm going to be in this business for another 15 to 20 years, and I want to be doing something interesting for those years. So it's time to re-evaluate Java, and think about what my next area of expertise should be.

Monday, July 6, 2009

The Resume

It's time to update my resume again.

It's starting to get too long: I've never given much thought to keeping it at a single page, but the HTML version is now at five printed pages, covering 17 years. One solution is to drop older entries, but I'd really like to keep my Fidelity experience listed: it was a major part of my career, and shows I've been something other than an individual contributor.

There's also the matter of the number of entries. When I first entered the professional workforce, my recruiter told me that the average job span was two years. I've kept pretty close to that, at least as a full-time employee, although my contract jobs tended to be a year or less. However, a bunch of short gigs raise red flags, even when listed as “contract.” That's always been a peeve of mine, since the sign of an advancing career within the same company is a new position every couple of years. I simply do it with different companies.

The skills section doesn't need too much editing. I've always taken the attitude that I should be able to stand up to an experienced person quizzing me on any skill that I've listed, and when sitting on the other side of the interview table, I tend to do just that. The database section needs some work — Teradata? what's Teradata? Actually, it's been long enough that I don't think I can call myself a “database programmer” anymore, so maybe I should just merge it into another category.

And then there's the readability issue, in particular the issue of whether I'm writing for a human or a search engine. Search engines have been good to me: I've seen many hits in my access log from people using them, and gotten more than a few follow-up emails (which is impressive, given how I don't provide an easy email link on my website). But ultimately, it's a person who's reading, who can get bored, who can throw it in the trash based on font choice or layout. HTML has some decided negatives here: no matter how much I tweak the CSS, I'm not going to match the layout that I had with Word.

One thing that I don't plan to change is the structure of the entries: early on I decided that my resume would speak to what I've done, rather than how I've done it. That causes problems with a lot of recruiters: they want to see buzzwords everywhere, on the belief that hiring managers want the same. And perhaps some hiring managers do want that, but I've decided that I don't really want to work for those managers. I'd rather work for someone who wants the same things I do.