Sunday, December 16, 2018

Database Connection Pools Need To Evolve

I never thought very deeply about connection pools, other than as a good example of how to use phantom references. Most of the projects that I worked on had already picked a pool, or defaulted their choice to whatever framework they were using. It is, after all, a rather mundane component, usually hidden behind the object-relational mapper that you use to interact with your database.

Then I answered this Stack Overflow question. And, after playing around a bit and writing a much longer answer, I realized that connection pools — even the newest — have been completely left behind by the current best practices in database management. Two of those practices, in particular:

  • Database credentials should be handled with care

    All of the pools that I've used expect you to provide database credentials in their configuration. Which means that you, as a security-conscious developer, need to retrieve those credentials from somewhere and manually configure the pool. Or, as a not-so-security-conscious developer, store them in plain text in a configuration file. In either case, you're doing this once, when the application starts. If there's ever a need to change credentials, you restart your application.

    That makes it difficult to practice credential rotation, where your database credentials change on a frequent basis to minimize the impact of losing those credentials. At the extreme, Amazon's RDS databases support generation of credentials that last only 15 minutes. But even if you rotate credentials on a monthly basis, the need to restart all of your applications turns this simple practice into a major undertaking, almost certainly manual, and one that may require application downtime.

  • Failover isn't a big deal

    Failover from primary database to replica has traditionally been a Big Event, performed manually, and often involving several people on a conference call. At a minimum you need to bring up a new replica, and with asynchronous, log-based replication there is always the chance of lost data. But with modern cluster-based database servers like Amazon Aurora, failover might happen during unattended maintenance. If the application can't recognize that it's in the middle of a short-duration failover, then failover remains a Big Deal.
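To make the rotation point concrete, here is a minimal sketch (all names invented) of the kind of hook I have in mind: a credentials supplier that the pool would consult on every physical connect, refreshing from some external source once the cached value goes stale. The fetcher is a hypothetical stand-in for whatever actually produces credentials.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Sketch only: a credentials supplier that a pool could call each time it
// opens a physical connection. The fetcher is a hypothetical stand-in for
// whatever actually produces credentials: Secrets Manager, an RDS IAM
// auth token generator, or a file that an operator rotates.
public class RefreshingCredentials {
    public record Credentials(String username, String password) {}

    private final Supplier<Credentials> fetcher;
    private final Duration maxAge;
    private Credentials cached;
    private Instant fetchedAt = Instant.MIN;

    public RefreshingCredentials(Supplier<Credentials> fetcher, Duration maxAge) {
        this.fetcher = fetcher;
        this.maxAge = maxAge;
    }

    // Because the pool asks for credentials at connect time rather than
    // reading them once at startup, rotation needs no application restart.
    public synchronized Credentials get() {
        if (Instant.now().isAfter(fetchedAt.plus(maxAge))) {
            cached = fetcher.get();
            fetchedAt = Instant.now();
        }
        return cached;
    }
}
```

With 15-minute RDS tokens you would set maxAge slightly below the token lifetime, so every new connection gets a token that is still valid.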

One solution to both of these problems is for the pool to provide hooks into its behavior: points where it calls out to user code to get the information it needs. For example, rather than read a static configuration variable for username and password, it would call a user-defined function for these values.

And that's fine, but it made me realize something: the real problem is that current connection pools try to do two things. The first is managing the pool: creating connections, holding them until needed, handing them out, collecting them again when the application code forgets to close them, and trying to do all of this with minimal overhead. The second task is establishing a connection, which has subtle variations depending on the database in use.

I think that the next evolution in connection pools is to separate those behaviors, and turn the connection management code into a composable stack of behaviors that can be mixed and matched as needed. One person might need a MySQL connection factory that uses an IAM credentials provider and a post-connection check that throws if the database is read-only. Another person might want a Postgres connection factory that uses an environment-based credentials provider and is fine with a basic connection check.
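As a sketch of what those three pieces might look like as Java interfaces (nothing like this exists in any current pool; all names are invented):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical three-part stack, composed at configuration time. The pool
// itself would only ever call open(); everything database-specific lives
// in the three pluggable pieces.
public final class ConnectionStack {
    public record Credentials(String username, String password) {}

    @FunctionalInterface
    public interface CredentialsProvider {      // e.g. IAM, environment, vault
        Credentials credentials() throws SQLException;
    }

    @FunctionalInterface
    public interface ConnectionFactory {        // e.g. MySQL vs. Postgres
        Connection connect(Credentials creds) throws SQLException;
    }

    @FunctionalInterface
    public interface ConnectionCheck {          // e.g. throw if read-only
        void verify(Connection conn) throws SQLException;
    }

    // Called by the pool whenever it needs a new physical connection.
    public static Connection open(CredentialsProvider creds,
                                  ConnectionFactory factory,
                                  ConnectionCheck check) throws SQLException {
        Connection conn = factory.connect(creds.credentials());
        check.verify(conn);
        return conn;
    }
}
```

The MySQL-with-IAM user and the Postgres-with-environment user from the paragraph above would wire up the same three interfaces with different lambdas.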

The tricky part is deciding on the right interfaces. I don't know that the three-part stack that I just described is the right way to go. I'm pretty sure that overriding javax.sql.DataSource isn't. And after creating an API, there's the hard work of implementing it, although I think that you'll find a lot of people willing to contribute small pieces.

The real question: is there a connection pool maintainer out there who thinks the same way and is willing to break her pool into pieces?


Simeon Malchev said...

I think that the feature you're looking for (separating the connection-creation logic and the related programming hooks from the connection-pooling logic) already exists in Vibur DBCP.

Vibur DBCP supports configuration of the JDBC Driver through its "driver" configuration parameter. The java.sql.Driver is a very simple interface that controls the connection creation - the credentials management can be done there, completely externalized and completely separated from the connection pool. There is also a "driverProperties" property that can be used, if needed.

Otherwise, Vibur DBCP relies heavily on programming hooks for observing and monitoring pool and connection events and behavior. In particular, there is a post-connection hook that can check and throw if the database is read-only; it can be configured via viburDS.getConnHooks().addOnInit(...). The documentation of all Vibur hooks is available here: .
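A delegating java.sql.Driver along those lines might look like the following sketch, where fetchCredentials() is a hypothetical stand-in for a secret-store lookup (nothing here is Vibur-specific; it is just the standard JDBC interface):

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

// Sketch only: wraps a real driver and injects fresh credentials on every
// connect. The pool only ever sees this class; rotation happens beneath it.
public class CredentialInjectingDriver implements Driver {
    record Creds(String username, String password) {}

    private final Driver delegate;

    public CredentialInjectingDriver(Driver delegate) {
        this.delegate = delegate;
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        Properties withCreds = new Properties();
        withCreds.putAll(info);
        Creds creds = fetchCredentials();           // hypothetical lookup
        withCreds.setProperty("user", creds.username());
        withCreds.setProperty("password", creds.password());
        return delegate.connect(url, withCreds);
    }

    // Hypothetical: replace with Secrets Manager, an IAM token, etc.
    private Creds fetchCredentials() {
        return new Creds("placeholder-user", "placeholder-password");
    }

    @Override public boolean acceptsURL(String url) throws SQLException { return delegate.acceptsURL(url); }
    @Override public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException { return delegate.getPropertyInfo(url, info); }
    @Override public int getMajorVersion() { return delegate.getMajorVersion(); }
    @Override public int getMinorVersion() { return delegate.getMinorVersion(); }
    @Override public boolean jdbcCompliant() { return delegate.jdbcCompliant(); }
    @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException { return delegate.getParentLogger(); }
}
```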

Disclosure: I'm the author of Vibur DBCP.

Camille Huot said...

Could you provide more examples of what you describe as a "composable stack of behaviors"? I get your point regarding handling the credentials and the failover, for which we can definitely provide hooks for the user, but I don't see many more use cases.

Thanks for sharing your thoughts.

My team provides in-house connection pools, both as a client library and as a standalone proxy.

kdgregory said...

One example might be to add the identity hashcode of the returned connection to the mapped diagnostic context of your logging framework, so that you can identify the places where your program incorrectly uses the same connection across threads.
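A sketch of that idea, using a ThreadLocal as a stand-in for the logging framework's MDC (with SLF4J you would call MDC.put and MDC.remove instead):

```java
// Sketch only: a pool would call checkout() as it hands a connection to the
// application and checkin() when the connection is returned. A logging
// layout that includes current() then tags every log line with the
// connection that the thread is supposed to be using.
public final class ConnectionTracker {
    private static final ThreadLocal<String> CURRENT_CONNECTION = new ThreadLocal<>();

    public static <C> C checkout(C conn) {
        CURRENT_CONNECTION.set(Integer.toHexString(System.identityHashCode(conn)));
        return conn;
    }

    public static void checkin() {
        CURRENT_CONNECTION.remove();
    }

    public static String current() {
        return CURRENT_CONNECTION.get();
    }
}
```

If two threads log with the same connection id at the same time, something is sharing a connection that shouldn't be.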

As I've been thinking about this some more, I would like to see the connection management code working like servlet filters: a chain of objects that each have the opportunity to affect either the properties used to create the connection, or the raw connection that's returned from the database.
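A rough sketch of that filter-chain shape (all names invented): each filter may rewrite the properties before the physical connect, and may wrap or validate the raw connection afterward.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.Properties;

// Sketch only: a chain of filters around a single database-specific
// connector, in the style of servlet filters.
public final class ConnectionFilterChain {
    public interface ConnectionFilter {
        default Properties beforeConnect(Properties props) { return props; }
        default Connection afterConnect(Connection conn) throws SQLException { return conn; }
    }

    @FunctionalInterface
    public interface PhysicalConnector {    // the database-specific piece
        Connection connect(Properties props) throws SQLException;
    }

    private final List<ConnectionFilter> filters;
    private final PhysicalConnector connector;

    public ConnectionFilterChain(List<ConnectionFilter> filters, PhysicalConnector connector) {
        this.filters = filters;
        this.connector = connector;
    }

    public Connection open(Properties initial) throws SQLException {
        Properties props = initial;
        for (ConnectionFilter f : filters) {     // e.g. inject rotated credentials
            props = f.beforeConnect(props);
        }
        Connection conn = connector.connect(props);
        for (ConnectionFilter f : filters) {     // e.g. reject read-only nodes
            conn = f.afterConnect(conn);
        }
        return conn;
    }
}
```

A credentials provider, a read-only check, and the MDC tagging from the previous comment would each be one filter in the list.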