blog.kdgregory.com

Saturday, July 4, 2020

Packaging Java for AWS Lambda

When I want to package a Java application in a single JAR with all of its dependencies, I normally turn to Maven’s Shade plugin. This plugin builds an “UberJAR” by unpacking all of the files from the project's dependencies and repackaging them into a single JAR.

The Shade plugin is also used by the Lambda Developer's Guide. I was surprised, then, to see this later in the same guide:

Reduce the time it takes Lambda to unpack deployment packages authored in Java by putting your dependency .jar files in a separate /lib directory. This is faster than putting all your function’s code in a single jar with a large number of .class files.

I was surprised because a single JAR containing class files should be the most efficient way to deploy a project. The JVM memory-maps the JAR files on its classpath, which means that it can access arbitrary parts of those files without an explicit call to the OS kernel. And an UberJAR means that the JVM only has to examine a single directory structure.

After some investigation, I discovered the reason: Lambda unpacks the contents of the deployment package into its filesystem. This means that an UberJAR turns into a lot of little files, each of which must be read using several kernel calls.

OK, so if you’re a Maven user, how do you package your Lambdas in the preferred way? The answer is that instead of using the Shade plugin, you use the Assembly plugin. The stated goal of this plugin is to allow developers to package their project artifact with related information, such as a website. But it's a very flexible plugin, using an “assembly descriptor” to specify which files should be included in the final artifact and where they should go.

This is the descriptor that I used for my Java Lambda example. It's duplicated for each of the Lambdas, and is found at src/assembly/deployment.xml:

<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/ASSEMBLY/2.0.0 http://maven.apache.org/xsd/assembly-2.0.0.xsd">
    <id>deployment</id>
    <includeBaseDirectory>false</includeBaseDirectory>
    <formats>
        <format>zip</format>
    </formats>
    <dependencySets>
        <dependencySet>
            <outputDirectory>/lib</outputDirectory>
            <useProjectArtifact>true</useProjectArtifact>
            <unpack>false</unpack>
            <scope>runtime</scope>
        </dependencySet>
    </dependencySets>
</assembly>

The descriptor is simple: it says that every JAR used by the Lambda — including the one built by the project itself — should be stored in the /lib directory of the final artifact. To use it, I replace the Shade configuration in my POM with this (the plugin.assembly.version property is defined as 3.2.0):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>${plugin.assembly.version}</version>
    <configuration>
        <descriptors>
            <descriptor>src/assembly/deployment.xml</descriptor>
        </descriptors>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

And that's it: when I run mvn package I get a file in my target directory with the suffix -deployment.zip.

Sunday, June 21, 2020

A History of Java Logging Frameworks, or, Why Commons-Logging is Still So Common

In the beginning there was System.out.println(). Or, for those of us who were purists (or wanted our output immediately), System.err.println(). And every developer had his or her own preferences for how their log messages would appear, leading to some very messy output as the number of developers on a team increased.

To bring order to this chaos, many projects implemented their own logging framework. I worked on one such project in the late 90s: the framework consisted of a single class that implemented a static log() method. I can't remember what the logging output looked like, but I suspect that it included a consistently-formatted timestamp.

According to this article by Ceki Gülcü, a project that he was working on in 1996 also implemented their own logging framework. But unlike the project I worked on, its framework was released to the public in 1999 as log4j.

Something else that became public in 1999 was the Jakarta project, a collaboration between Sun and the Apache Software Foundation to produce the Tomcat application server. And of course Tomcat, being a large application with contributions by many people, had its own logging framework (and it still does, although the implementation and purpose have changed over time).

And lastly, 1999 was also the year that JSR 047, the Java Logging API Specification, was released. It turned into the java.util.logging (JUL) package, released as part of JDK 1.4 in 2002.
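For reference, the JUL API has the same shape as the other frameworks of the era: a named logger obtained from a static factory, with per-level output methods. A minimal sketch (the class and message names are mine, not from any particular project):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulExample {
    // loggers are named, conventionally after the class, and managed by a global LogManager
    private static final Logger logger = Logger.getLogger(JulExample.class.getName());

    public static void main(String[] args) {
        logger.info("application started");
        // guard expensive message construction behind a level check
        if (logger.isLoggable(Level.FINE)) {
            logger.fine("detail: " + expensiveDetail());
        }
    }

    private static String expensiveDetail() {
        return "computed only when FINE is enabled";
    }
}
```

The isLoggable() guard is the idiom of the era for avoiding the cost of building a message that would just be discarded.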

A plethora of logging frameworks isn't a problem if you're an application developer: you pick one and stick with it. If you're developing a library that might be used by those applications, however, it's a nightmare. If your library uses Log4J and the application uses JUL, then the output becomes a mess and the developers using your library complain.

At the time, the Jakarta project was arguably the largest single producer of libraries for Java application developers, so they added another: Jakarta Commons Logging (since renamed to Apache Commons Logging, but you'll still see the initials "JCL" in the documentation). The idea of Commons Logging was that you would write your code against the JCL API, add the JCL JAR to your dependencies, and it would figure out what actual logging framework you were using.

Although Commons Logging was intended for libraries, application developers adopted it as well. I can't speak for anyone else, but I looked at it as “won't hurt, and means I don't need to keep track of multiple APIs.” Unfortunately, some developers discovered that it could hurt: they were using Tomcat, regularly redeploying their applications, and had memory leaks that would eventually cause Tomcat to stop working.

Looking back, it appears that these leaks were due to missing transitive dependencies in the deployment bundle.* This took place in the days before Maven 2, when developers were responsible for identifying every JAR that went into their application, and ensuring that it somehow got there (which often meant lots of JARs checked into source control). It wouldn't be obvious that a library used Commons Logging, so the application developer wouldn't bother to add it to the deployed WAR. Unfortunately, Tomcat made it available on the system classpath (because it used Commons Logging internally), so the developers never knew they were missing the JAR. And since Commons Logging needed to know about the actual deployed logging framework, it would establish a strong reference to the Log4J implementation that was in the WAR, preventing the classloader from unloading the classes belonging to the WAR.

That problem was rather quickly resolved: Commons Logging version 1.1 was released in 2006, Tomcat 6 moved it off the public classpath (although Tomcat 5 remained “in the wild” for quite some time), and Maven 2 ensured that a WAR would contain all of the dependencies that it needed. But developers have very long memories for things that go wrong, especially things that happened to someone else who blogged about it.**

At the same time, several popular Java frameworks appeared: Hibernate in 2001 and Spring in 2002 are two of the most familiar. These frameworks were complex enough to need logging, but for obvious reasons wouldn't want to be tied to a specific implementation. Commons Logging provided that capability (and thus became an untracked dependency for many builds).

Moving forward, the new millennium saw continued change in the logging world. Log4J became an Apache project. Ceki Gülcü left Apache and developed Logback and SLF4J. And in the early 2010s, the remaining Apache Log4J committers decided that the Log4J 1.x implementation couldn't be extended and completely rewrote it as Log4J 2.x.

Of these, SLF4J is particularly interesting because it was a logging facade, in direct competition with Commons Logging. Unlike Commons Logging, which tried to infer what underlying framework you were using, SLF4J required you to explicitly include “bridge” JARs for your actual implementation. SLF4J also provided additional features, such as formatted log messages, that were very attractive to application developers.

However, adopting SLF4J had its own pain point: if you used Spring (or Hibernate, or any other library that depended on Commons Logging), Maven would add it to your build as a transitive dependency, where it might take precedence over the “slf4j-jcl” bridge from SLF4J (it all depended on the order in which JARs were given to the classloader). A key feature of Maven POMs from this era is the multiple <exclusions> entries used to prevent such transitive dependencies.
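For anyone who missed that era, such an exclusion looked like the following (the Spring artifact is just an example, and I've omitted version elements):

```xml
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```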

So here we are in 2020, and the logging frameworks scene is more complex than ever:

  • Log4J 1.x is still used by many projects, even though it was officially end-of-lifed in 2015. One of its most useful features doesn't work under Java 9 (and, I presume, later versions), so its popularity may fade (although it seems that many people, particularly those using OpenJDK, are quite happy with Java 8).
  • SLF4J/Logback is still used by many developers (including myself), even though new releases seem to have stalled at the 1.3.0-alpha stage (after 20 years of writing logging frameworks, I'm guessing Ceki is in need of a break).
  • Log4J 2.x provides “bridge” JARs that let people use Commons Logging and SLF4J as their API, with Log4J2 as the back-end.
  • Commons Logging still exists, but hasn't seen a release since 2014. Nor has its list of supported frameworks changed: Log4J 1.x, JUL, and Avalon LogKit.

Perhaps counter-intuitively, even with all of these changes, Commons Logging is still used by many libraries. However, it's become less visible. Spring Framework, for example, implements the API internally; as an application developer, you no longer need to explicitly exclude the JAR. And if you use Spring Boot, its 3,000+ line dependency-management POM will explicitly exclude Commons Logging from the libraries that use it.

And if you're developing a library, I think that Commons Logging is still the best choice for internal logging. It provides a consistent interface, and it's reasonable to expect that the consumers of your library already have the bridge JARs that they need (which might mean the internal implementation in Spring Framework). But there are a few best practices to keep your users from cursing you:

  • Mark your dependency as provided. This tells Maven (or Gradle, or any of the other tools that follow the Maven standard) not to resolve the transitive dependency; it will rely instead on an explicitly-referenced JAR to provide the necessary classes.
  • Ensure that you don't establish a transitive dependency via a package that you depend on, like HTTP Components. Take the time to look at your entire dependency tree (using mvn dependency:tree or equivalent), and add an exclusion if anything tries to pull in Commons Logging.
  • Don't implement your own logging facade. It's tempting, I know: you want to protect the people that haven't configured logging into their application. And it seems easy: two classes (a log factory and a logger), with some reflection to pick an appropriate back end. But there's a lot of room for error. If you get it wrong, you'll introduce subtle bugs and performance issues, and your experienced users will curse you (and look for an alternative library). And if you get it right, you'll find that you've re-implemented Commons Logging. And good or bad, it won't actually help inexperienced users: they should learn to use logging rather than deploy a black box and cross their fingers.
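To make the first of these points concrete, here's what the declaration might look like in your library's POM (the version shown is the 2014 release):

```xml
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.2</version>
    <scope>provided</scope>
</dependency>
```

With provided scope, the classes are available when compiling your library, but the JAR is not dragged into the consumer's build as a transitive dependency.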

Bottom line: if you're writing an application, use whatever logging framework you want. I have a strong preference for SLF4J/Logback, but Log4J 2.x does have some features that it's missing. However, if you're implementing a library, stick with Commons Logging. In almost every case it will Just Work™.


* Open Source Archaeology can be a fascinating experience. It's next to impossible to find a source-code repository that gives a full history for older projects: they typically started on CVS, then moved to SVN, and are now on Git. In some cases they also moved between hosting providers (although it looks like Hibernate is still actively maintained on SourceForge, which makes me feel better about a couple of my older projects). Each move lost information such as version tags (assuming they ever existed).

Maven Central is also less helpful than expected, because many projects changed their group or artifact IDs over their lifetime (plus, who wants to dig through 55 pages of org.springframework JAR listings). And a lot of older versions are “retconned”: they were imported into the repository long after release, with a made-up, minimal POM.

Fortunately, most of the larger projects maintain their own archives, so you can download a particular version and unpack it to see what's there. And if you're looking for dependencies, you can pick a likely class and run javap -c to disassemble it and then look at variable declarations. It's a lot of work, and some might call it obsessive. But that's how I learned that Spring Framework 1.0 didn't log at all, while 1.2 used Commons Logging.

** They're also very vocal about it. I wrote an article on logging techniques in 2008 that referenced Commons Logging (because in my world it wasn't an issue), and constantly received comments telling me that it was horrible and should never be used. I eventually updated the article to reference SLF4J, primarily because it was focused on application logging, but also to keep the critics quiet.

Thursday, April 9, 2020

Stupid Git Tricks: Interactive Rebase

I like to provide a nice, clean history in my pull requests. Reviewers should be able to follow each commit, and see how the functionality is built up. No extraneous commits. Nothing out of order. Everything looking like one smooth path from idea to implementation.

Unfortunately, my development process doesn't quite work that way. For one thing, I commit (and push) frequently — as in every 10-15 minutes when I'm making lots of changes. For another, I'll often realize that there's a small change that should have been introduced several commits previously. For these, and other reasons, I find git rebase -i invaluable.

OK, some of you are probably outraged: “you're changing history!” Settle down. This is for development branches, not master. And I'm willing to adapt in a team setting: if my team members want to see messy commit histories in a pull request, I'm OK with giving that to them. But only if they squash merges.

So, here are a few of the ways that I change history. You're free to avoid them.

Combining commits

Here's one morning's commit history:

commit 6aefd6989ba7712cb047d661b68d34c888badea4 (HEAD -> dev-writing_log4j2, origin/dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 12:13:19 2020 -0400

    checkpoint: content updates

...

commit e8503f01c72618709ac5231a78cfa8549fcfb7b3
Author: Me 
Date:   Sun Apr 5 09:22:51 2020 -0400

    checkpoint: content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Three hours of changes, split up over eight commits, with regular pushes so I wouldn't lose work if my SSD picked today to fail. I really don't want to see all of those in my history.

The solution is to squash those commits down using an interactive rebase:

git rebase -i 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c

When I run this, it starts my editor and shows me the following:

pick e8503f0 checkpoint: content updates
pick f71ddca checkpoint: content updates
pick a8d7a25 checkpoint: content updates
pick 6b87b9b checkpoint: content updates
pick 556a346 checkpoint: content updates
pick 466dd26 checkpoint: content updates
pick 0034657 checkpoint: content updates
pick 6aefd69 checkpoint: content updates

# Rebase 8bdb788..6aefd69 onto 8bdb788 (8 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

A list of commits, instructions on how to work with them, and a few warnings about what happens if I do something dumb. To squash these commits I update all but the first to be a “fixup”:

pick e8503f0 checkpoint: content updates
f f71ddca checkpoint: content updates
f a8d7a25 checkpoint: content updates
f 6b87b9b checkpoint: content updates
f 556a346 checkpoint: content updates
f 466dd26 checkpoint: content updates
f 0034657 checkpoint: content updates
f 6aefd69 checkpoint: content updates

Save this and exit the editor, and Git applies all of those changes:

Successfully rebased and updated refs/heads/dev-writing_log4j2.

And now when I look at my history, this is what I see:

commit 51f5130422b524603d6249ef40e012aeecde5422 (HEAD -> dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 09:22:51 2020 -0400

    checkpoint: content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Note that the last commit hash has changed, and my working HEAD no longer refers to the origin branch. This means that I'm going to need to force-push these changes. But before that, there's one more thing that I want to do:

git commit --amend -m "content updates" --reset-author

This command does two things. First, it updates my commit message: this is no longer a “checkpoint” commit. The second thing it does is update the basic commit info, in this case just the timestamp. If you looked closely at the history above, you saw that all of the commits had been marked with the timestamp of the first; --reset-author makes the history more closely reflect what actually happened (it can also be used to pretend that other people didn't contribute to the commit, but I'll assume you're more honorable than that).

Now the log looks like this:

commit fdef5d6f0a19218784b87a596322816347db2232 (HEAD -> dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 12:22:46 2020 -0400

    content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Which is what I want to see, so time to force-push and overwrite the previous chain of commits:

> git push -f
Counting objects: 4, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 4.52 KiB | 2.26 MiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
To ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Website
 + 6aefd69...fdef5d6 dev-writing_log4j2 -> dev-writing_log4j2 (forced update)

I should note here that the previous chain of commits still exists in your repository. If, for some reason, you want to retrieve them, you can explicitly check-out the former head commit:

git checkout -b recovery 6aefd6989ba7712cb047d661b68d34c888badea4

Of course, if you close your terminal window, you might not find that commit hash again, so if you're worried you should write it down somewhere. When I'm making a large set of changes, I'll often create a temporary branch from the one that's being rebased, just in case (unfortunately, I often forget to switch back to the branch that I want to rebase).
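There's one more safety net worth knowing about: even if you lose the hash, git's reflog records every commit that HEAD has pointed to, and unreachable entries are kept for at least 30 days by default. A sketch in a throwaway repository (names and messages are made up):

```shell
# create a scratch repository with two commits, then "lose" one via a hard reset
tmpdir=$(mktemp -d) && cd "$tmpdir"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"
git reset -q --hard HEAD~1

# the reflog still shows the discarded commit, so its hash can be recovered
git reflog --format='%h %gs'
```

From there, git checkout -b recovery with the recovered hash works exactly as shown above.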

Re-ordering commits

Especially at the start of a new project, I might jump around and edit several different things, resulting in a messy commit history:

pick fc46b5b update docs
pick 0f734fb minor change to feature X
pick 2233c01 update docs
pick fe56f59 another change to feature X
pick d3fb025 related change to feature Y
pick aec87c1 update docs
pick 66ef266 something unrelated to either X or Y
pick 96179b3 changing Y
pick 904a779 update docs

Interactive rebase allows you to move commits around, and then optionally squash those moved commits:

pick fc46b5b update docs
f 2233c01 update docs
f aec87c1 update docs
f 904a779 update docs
pick 0f734fb minor change to feature X
f fe56f59 another change to feature X
pick d3fb025 related change to feature Y
f 96179b3 changing Y
pick 66ef266 something unrelated to either X or Y

There are a couple of gotchas when you do this. First, you need to make sure that you're not changing both X and Y in the same commit. If you do, you can still squash the commits together, but it's pointless to try to track the work in each feature separately.

Second, make sure that you preserve order: in my example, commit 0f734fb happened before fe56f59, and the interactive rebase needs to keep them in that order. If you don't, you can end up with merge conflicts that are challenging to resolve.

Lastly, and most important, make sure you have the same number of commits that you started with. If you accidentally delete a commit rather than move it, you will lose that work. For this reason, I tend to use interactive rebase on small pieces of my history, perhaps making several passes over it.

Editing commits

When writing my article about Log4J2 appenders, I saw a comment that I wanted to change in the accompanying example code. Unfortunately, it wasn't the HEAD commit:

commit 8007214ef232abf528baf2968162b51dcd2c09ca
Author: Me 
Date:   Sat Apr 4 09:34:53 2020 -0400

    README

commit 38c610db6a02747d7017dff0a9c2b7ed290e30e1
Author: Me 
Date:   Sat Apr 4 08:34:12 2020 -0400

    stage-10: add tests

commit 5dfd79e3f879038e915fa04c83f8eb9b0f695e35
Author: Me 
Date:   Tue Mar 31 08:38:17 2020 -0400

    stage-9: implement a lookup

There are two ways that I could have approached this. The first would be to create a new commit and then reorder it and turn it into a fixup. The second is to edit the file as part of an interactive rebase, by marking the commit with an "e":

pick 5dfd79e stage-9: implement a lookup
e 38c610d stage-10: add tests
pick 8007214 README

When I do this, git works through the commits, and stops when it reaches the marked one:

Stopped at 38c610d...  stage-10: add tests
You can amend the commit now, with

  git commit --amend 

Once you are satisfied with your changes, run

  git rebase --continue

I can now edit any files in my working tree (they don't have to be part of the original commit). Once I'm done, I do a git add for changed files, followed by both git commit --amend and git rebase --continue. After the rebase completes, I can force-push the changes.

Beware that editing commits can introduce merge conflicts: if a later commit touches the same code, you'll have to stop and resolve the conflict. This is more likely when you edit early commits, or when the edits are wide-ranging. It is far less likely for changes like comments.

Cherry-picking into the middle of a branch

You may be familiar with git cherry-pick, which takes an arbitrary commit and puts it at the HEAD of your current branch. This can be useful when two teams are working on the same general area of the codebase: often one team will incidentally do something that the other team finds valuable.

Interactive rebase is like cherry-picking on steroids: you can insert a commit anywhere in your commit tree. To be honest, I find this more risky than beneficial; instead I would cherry-pick to HEAD and then perhaps use an interactive rebase to move the commit to where it “belongs.” But in the interest of “stupid git tricks,” here we go.

Let's say that you've been working on a branch and have been making changes, starting with changes to the build scripts. Then you talk with a colleague, and learn that she has also made changes to the build. You could cherry-pick her change to the end of your branch and use it moving forward, but you're somewhat OCD and want to keep the build changes together. So you fire up git rebase -i and add your colleague's commit as a new “pick”:

pick ffc954d build scripts
p 1438a13d11d6001de876a034f434a050c09b587d
pick b497403 update 1
pick 18e8415 update 2
pick 33a4e9d update 3

Now when you do a git log, you see something like this:

... skipping two commits

commit 7b62acb8d9100f379a0d43e3227c36ae91c1edd9
Author: Me 
Date:   Fri Mar 27 10:11:01 2020 -0400

    update 1

commit c579ed88403354faed83213da63d4546c5aa13b5
Author: Someone Else 
Date:   Sun Jan 5 09:30:14 2020 -0500

    some build changes

commit ffc954dc41555282ece3e2b7a0197472c0af9f11
Author: Me 
Date:   Mon Jan 6 08:02:30 2020 -0500

    build scripts

Note that the commit hash has changed: from 1438a13d to c579ed88. This is because it's now part of a new branch: a commit hash is based not merely on the content of the commit, but also on the commit chain that it's a part of. However, the committer's name and the commit date are unchanged.

Wrapping up: a plea for clean commit histories

A standard Git merge, by preserving the chains of commits that led to the merge point, is both incredibly useful and incredibly annoying. It's useful, in that you can move along the original branch to understand the context of a commit. It's annoying, in that your first view of the log shows the commits intermingled and ordered by date, completely removing context.

I find messy histories to be similar. Software development doesn't happen in a clean, orderly fashion: developers often attack a problem from multiple sides at once. And that can result in commit histories that jump around: instead of “fix foo” you have “add test for bar”, followed by “make test work”, followed by an endless string of “no, really, this time everything works”.

Maybe you find that informative. If not, do your future self (and your coworkers) a favor and clean it up before making it part of the permanent record.