blog.kdgregory.com

Tuesday, January 8, 2019

Multi-threaded Programming with AWS Lambda

Here's a simple multi-threaded program. Note that I use System.err to write messages rather than System.out: the latter is buffered, so that your output doesn't appear immediately; the former flushes output with every call.

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;   // from aws-lambda-java-core

public class MultithreadExperiment
{
    // lets you run the handler locally, as a normal Java program
    public static void main(String[] argv) throws Exception {
        new MultithreadExperiment().handler(new HashMap<String,Object>(), null);
    }

    public void handler(Map<String,Object> ignored, Context lambdaContext) {
        System.err.println("handler starting");
        new Thread(new BackgroundOperation()).start();
        System.err.println("handler finishing");
    }

    private static class BackgroundOperation implements Runnable {
        @Override
        public void run() {
            System.err.println("background thread starting: " + Thread.currentThread().getName());
            for (int ii = 0 ; ii < 10 ; ii++) {
                System.err.println("background thread running: " + Thread.currentThread().getName() + " at " + new Date());
                try {
                    Thread.sleep(1000);
                }
                catch (InterruptedException ex) {
                    System.err.println("background thread interrupted: " + Thread.currentThread().getName() + " at " + new Date());
                }
            }
        }
    }
}

When you run the program it spins up a background thread, which writes out a message every second. Since new threads are non-daemon by default, the program won't exit until this thread is finished. As a result, this is what you'll see for output:

handler starting
handler finishing
background thread starting: Thread-0
background thread running: Thread-0 at Mon Jan 07 18:18:59 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:00 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:01 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:02 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:03 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:04 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:05 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:06 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:07 EST 2019
background thread running: Thread-0 at Mon Jan 07 18:19:08 EST 2019

If we upload this program to AWS Lambda and run a test invocation, however, we see something completely different. Here's the output from CloudWatch Logs after the first invocation:

2019-01-08T00:15:39.957Z START RequestId: 87232aba-12da-11e9-8390-ed48849189e3 Version: $LATEST
2019-01-08T00:15:39.993Z handler starting
2019-01-08T00:15:39.993Z handler finishing
2019-01-08T00:15:39.994Z END RequestId: 87232aba-12da-11e9-8390-ed48849189e3
2019-01-08T00:15:39.994Z REPORT RequestId: 87232aba-12da-11e9-8390-ed48849189e3 Duration: 36.66 ms Billed Duration: 100 ms  Memory Size: 512 MB Max Memory Used: 42 MB 

No sign of the background thread. Let's invoke it again after a short wait:

2019-01-08T00:16:22.941Z START RequestId: a14450e0-12da-11e9-b18c-ed59274c4c08 Version: $LATEST
2019-01-08T00:16:22.944Z background thread starting: Thread-0
2019-01-08T00:16:22.949Z handler starting
2019-01-08T00:16:22.950Z background thread running: Thread-0 at Tue Jan 08 00:16:22 UTC 2019
2019-01-08T00:16:22.950Z handler finishing
2019-01-08T00:16:22.950Z background thread starting: Thread-1
2019-01-08T00:16:22.950Z background thread running: Thread-1 at Tue Jan 08 00:16:22 UTC 2019
2019-01-08T00:16:22.950Z END RequestId: a14450e0-12da-11e9-b18c-ed59274c4c08
2019-01-08T00:16:22.950Z REPORT RequestId: a14450e0-12da-11e9-b18c-ed59274c4c08 Duration: 2.92 ms Billed Duration: 100 ms  Memory Size: 512 MB Max Memory Used: 42 MB 

Here we see the output from two threads: “Thread-0” was presumably started by the first invocation, while “Thread-1” was started by the current invocation. Invoking it again:

2019-01-08T00:17:18.950Z START RequestId: c296963d-12da-11e9-855a-d106c9ec09c7 Version: $LATEST
2019-01-08T00:17:18.950Z background thread running: Thread-0 at Tue Jan 08 00:17:18 UTC 2019
2019-01-08T00:17:18.950Z background thread running: Thread-1 at Tue Jan 08 00:17:18 UTC 2019
2019-01-08T00:17:18.968Z handler starting
2019-01-08T00:17:18.969Z handler finishing
2019-01-08T00:17:18.969Z background thread starting: Thread-2
2019-01-08T00:17:18.969Z background thread running: Thread-2 at Tue Jan 08 00:17:18 UTC 2019
2019-01-08T00:17:18.969Z END RequestId: c296963d-12da-11e9-855a-d106c9ec09c7
2019-01-08T00:17:18.969Z REPORT RequestId: c296963d-12da-11e9-855a-d106c9ec09c7 Duration: 15.47 ms Billed Duration: 100 ms  Memory Size: 512 MB Max Memory Used: 42 MB 

We've got another thread starting, and output from the threads started by the previous invocations. What's really interesting is that the message from those threads shows the current time, which is nearly a minute later than the previous invocation. Clearly we're not sleeping for only a single second. So what's happening?

The answer can be found in the documentation:

After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation. In effect, the service freezes the execution context after a Lambda function completes, and thaws the context for reuse, if AWS Lambda chooses to reuse the context [...]

While the documentation doesn't go into details, I think that it's safe to assume that Lambda uses the “cgroups freezer,” just like docker pause. The process still exists on the host system, but is completely suspended. When later unpaused, the background thread, like Rip Van Winkle, thinks that it's only been sleeping for a second.

But that doesn't completely explain why output from the first invocation appears in the logs for the second invocation. The answer for that is simple: it didn't have a chance to run.

Starting a thread is fast, but it's not instant. In my experiments with Lambda, it typically took between 0.1 and 0.15 milliseconds, although it occasionally took much longer. The handler function in this example does only one thing after starting the thread: write its final message. True, that's a kernel call with IO, but it's probably less expensive than the thread start (indeed, in my experiments it took around 0.075 millis). Sometimes you'll get lucky, as shown in the second and third examples; sometimes (especially on a cold start) you won't.
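
If you want to compare the two costs yourself, here's one simple way to do it. This is a sketch, not a rigorous benchmark (no warmup, single iteration), and it isn't the code from my experiments:

public class TimingExperiment
{
    public static void main(String[] argv) throws Exception {
        Thread thread = new Thread(() -> { /* nothing to do */ });

        // time how long it takes to start (but not run) the thread
        long start = System.nanoTime();
        thread.start();
        long threadStartNanos = System.nanoTime() - start;

        // time a single write to System.err for comparison
        start = System.nanoTime();
        System.err.println("handler finishing");
        long printlnNanos = System.nanoTime() - start;

        System.err.println("Thread.start() took " + (threadStartNanos / 1000.0) + " microseconds");
        System.err.println("println() took " + (printlnNanos / 1000.0) + " microseconds");

        thread.join();
    }
}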

On the other hand, there's plenty of time for background threads to run when a Lambda starts: as you can see from the log output, 10 to 30 milliseconds depending on whether it's a cold start or not. That's why, in the second and third invocations, you see the output from the threads started by previous invocations.

So what's the practical impact?

It means that you can't just “fire and forget” a background task in Lambda, like you can with a normal Java application. You have to use Thread.join() or some other coordination mechanism to ensure that the background task finishes before the handler function returns. If you don't, that task may never execute: there's no guarantee that the Lambda container that was running it will ever be invoked again.
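
For example, here's one way to rewrite the handler above so that it waits for the background work before returning. It's just a sketch; an ExecutorService with Future.get() would work equally well:

public void handler(Map<String,Object> ignored, Context lambdaContext) {
    System.err.println("handler starting");

    Thread worker = new Thread(new BackgroundOperation());
    worker.start();

    // ... any other work the handler needs to do ...

    try {
        // wait for the background task to finish before returning; otherwise
        // the execution context may be frozen while the task is still running
        worker.join();
    }
    catch (InterruptedException ex) {
        Thread.currentThread().interrupt();
    }

    System.err.println("handler finishing");
}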

And if you're using a long-running background worker, such as a logging library that batches messages to reduce the number of web-service calls it makes? You (well, I) need to find a way to make those calls synchronous.

Thursday, December 27, 2018

log4j-aws-appenders now supports Logback

I started my Log4J appenders project because I wasn't happy with how the AWS CloudWatch agent broke apart logfiles. It seemed, as I said in the FAQ, like it would be an easy weekend project. That was nearly two years ago.

In the interim, I added support for Kinesis Streams (a year ago), enabling search-engine-based centralized logging with AWS-managed Elasticsearch. Surprisingly, it wasn't until after that implementation effort that I truly “bought into” the benefits of using a search engine to examine logs. Now I can't imagine going back to grep.

After giving a talk on centralized logging to the local Java users' group, some of the feedback that I got was “it's nice, but we're not using Log4J 1.x.” So in the early fall I started to break the library into pieces: a front-end that's tied to a particular logging framework, and a back-end that handles communication with AWS. This turned out to be quite easy, which I think means that I had a good design to start with.
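
To give a rough idea of the split (the names here are illustrative, not the actual interfaces from the library): the back-end exposes a small writer interface, and each framework-specific appender is little more than a translation layer on top of it.

// back-end: knows how to batch messages and send them to a particular
// AWS service, with no dependency on any logging framework
interface LogWriter {
    void addMessage(String message);
    void stop();
}

// front-end: a Log4J 1.x appender that translates framework events
// into calls on the back-end
class ExampleCloudWatchAppender extends org.apache.log4j.AppenderSkeleton {
    private LogWriter writer;   // created when the appender is configured (omitted here)

    @Override
    protected void append(org.apache.log4j.spi.LoggingEvent event) {
        writer.addMessage(getLayout().format(event));
    }

    @Override
    public void close() {
        writer.stop();
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }
}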

Then it was a matter of picking another logging framework, and learning enough about it to be able to implement appenders. I picked Logback because it's the default logging framework for Spring, and because it's the native back-end for SLF4J (which I've been using with Log4J for around five years now).

One of the interesting things that came out of this work is that I now see a good use case for multiple inheritance. There's an enormous amount of duplicated code because each appender has two is-a relationships: one to the logging framework and another to the back end. It would be nice if Java had something like Scala traits, where each trait would encapsulate one of the is-a relationships, and the appender would just be a combination of traits. On the other hand, I've seen enough ugly code using traits that I still think Gosling et al made the right decision.

Log4J 2.x is up next, although I think I'm going to take a break for a few months. I have several other projects that have taken a back seat, including a library of AWS utility classes that I started a year ago and that still hasn't seen its first release.

Happy Holidays!

Sunday, December 16, 2018

Database Connection Pools Need To Evolve

I never thought very deeply about connection pools, other than as a good example of how to use phantom references. Most of the projects that I worked on had already picked a pool, or defaulted their choice to whatever framework they were using. It is, after all, a rather mundane component, usually hidden behind the object relational manager that you use to interact with your database.

Then I answered this Stack Overflow question. And, after playing around a bit and writing a much longer answer, I realized that connection pools — even the newest — have been completely left behind by the current best practices in database management. Two of those practices, in particular:

  • Database credentials should be handled with care

    All of the pools that I've used expect you to provide database credentials in their configuration (the sketch after this list shows what that typically looks like). Which means that you, as a security-conscious developer, need to retrieve those credentials from somewhere and manually configure the pool. Or, as a not-so-security-conscious developer, store them in plain text in a configuration file. In either case, you're doing this once, when the application starts. If there's ever a need to change credentials, you restart your application.

    That makes it difficult to practice credential rotation, where your database credentials change on a frequent basis to minimize the impact of losing those credentials. At the extreme, Amazon's RDS databases support generation of credentials that last only 15 minutes. But even if you rotate credentials on a monthly basis, the need to restart all of your applications turns this simple practice into a major undertaking, almost certainly manual, and one that may require application downtime.

  • Failover isn't a big deal

    Failover from the primary database to a replica has traditionally been a Big Event, performed manually, and often involving several people on a conference call. At a minimum you need to bring up a new replica, and with asynchronous, log-based replication there is always the chance of lost data. But with modern cluster-based database servers like Amazon Aurora, failover might happen during unattended maintenance. If the application can't recognize that it's in the middle of a short-duration failover, that means it's still a Big Deal.
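
To make the first point concrete, here's a sketch of what pool configuration typically looks like today, using HikariCP as an example; the specific pool doesn't matter, since they all follow the same pattern of static credentials supplied up front.

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfigExample
{
    public static DataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://dbhost:5432/example");

        // credentials baked into configuration, read once at startup;
        // changing them means reconfiguring and restarting the application
        config.setUsername("example");
        config.setPassword("super-secret-password");

        return new HikariDataSource(config);
    }
}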

One solution to both of these problems is for the pool to provide hooks into its behavior: points where it calls out to user code to get the information it needs. For example, rather than read a static configuration variable for username and password, it would call a user-defined function for these values.
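
As a sketch of what such a hook might look like (these interfaces are hypothetical; no current pool exposes exactly this), the pool would accept a credentials provider and call it every time it opens a physical connection:

// hypothetical hook: the pool calls this each time it opens a new
// physical connection, so rotated credentials are picked up without
// restarting the application
public interface CredentialsProvider {
    String getUsername();
    String getPassword();
}

// one possible implementation: read the current values from environment
// variables; they could just as easily come from RDS IAM authentication
// or a secrets manager
class EnvironmentCredentialsProvider implements CredentialsProvider {
    @Override
    public String getUsername() { return System.getenv("DB_USERNAME"); }

    @Override
    public String getPassword() { return System.getenv("DB_PASSWORD"); }
}

The key difference from today's pools is that the provider is consulted for every physical connection, not just once at startup; rotating credentials then only requires updating whatever store the provider reads from.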

And that's fine, but it made me realize something: the real problem is that current connection pools try to do two things. The first is managing the pool: creating connections, holding them until needed, handing them out, collecting them again when the application code forgets to close them, and trying to do all of this with the minimal amount of overhead. The second task is establishing a connection, which has subtle variations depending on the database in use.

I think that the next evolution in connection pools is to separate those behaviors, and turn the connection management code into a composable stack of behaviors that can be mixed and matched as needed. One person might need a MySQL connection factory that uses an IAM credentials provider and a post-connection check that throws if the database is read-only. Another person might want a Postgres connection factory that uses an environment-based credentials provider and is fine with a basic connection check.
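
Continuing that sketch (again, these types are hypothetical; they don't exist in any current pool), the database-specific pieces might look like this, with the pool itself reduced to managing the lifecycle of whatever connections they produce:

import java.sql.Connection;
import java.sql.SQLException;

// opens physical connections; implementations encapsulate driver- and
// database-specific behavior (MySQL, Postgres, ...) and use the
// CredentialsProvider from the earlier sketch
interface ConnectionFactory {
    Connection newConnection(CredentialsProvider credentials) throws SQLException;
}

// examines a connection before the pool hands it to the application,
// throwing if it shouldn't be used (for example, the database is read-only)
interface ConnectionCheck {
    void check(Connection cnxn) throws SQLException;
}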

The tricky part is deciding on the right interfaces. I don't know that the three-part stack that I just described is the right way to go. I'm pretty sure that overriding javax.sql.DataSource isn't. And after creating an API, there's the hard work of implementing it, although I think that you'll find a lot of people willing to contribute small pieces.

The real question: is there a connection pool maintainer out there who thinks the same way and is willing to break her pool into pieces?