Sunday, March 5, 2017

MySQL: The problem with synthetic primary keys

One of the standard practices in database design is to use a “synthetic” primary key — often an auto-increment integer — rather than a “natural” key relevant to the business domain. This is a Good Thing: natural keys are often not unique, or not universally present, even when you think they are. Social security number is a perfect example: aside from the fact that you're not legally allowed to use it for non-tax-related identification, not everybody has one, and some people (either maliciously or accidentally) give you the wrong one.

So this post is not against synthetic primary keys per se. Rather, it's about the performance penalty that you'll pay if you create a single-column auto-increment primary key for every table in your database.

As an example, I'm going to use the USERS and EVENTS tables from my previous post. The former has 100,000 rows, the latter 10,000,000. As before, only 25% of events have associated users, so it's not a strict parent-child relationship, but we'll get to that.

create table USERS
(
    ID          integer not null auto_increment,
    CREATED_AT  timestamp not null default CURRENT_TIMESTAMP,
    VALUE       varchar(255),

    primary key (ID),
    key         (CREATED_AT)
);

create table EVENTS
(
    ID          integer not null auto_increment,
    CREATED_AT  timestamp not null default CURRENT_TIMESTAMP,
    TYPE        smallint not null,
    USER_ID     integer null,
    VALUE       varchar(255),

    primary key (ID),
    key         EVENTS_USER_IDX (USER_ID)
);

Both of these tables have synthetic primary keys, an auto-increment ID column. The EVENTS table has a foreign key relationship to USERS via USER_ID, and a secondary index to (we hope!) improve join times. Here's a typical query, with its execution plan:

select  count(*)
from    USERS U
join    EVENTS E on E.USER_ID = U.ID
where   U.CREATED_AT > subdate(CURRENT_TIMESTAMP(), 7)
and     E.TYPE = 7;
+----+-------------+-------+------------+-------+--------------------+-----------------+---------+--------------+------+----------+--------------------------+
| id | select_type | table | partitions | type  | possible_keys      | key             | key_len | ref          | rows | filtered | Extra                    |
+----+-------------+-------+------------+-------+--------------------+-----------------+---------+--------------+------+----------+--------------------------+
|  1 | SIMPLE      | U     | NULL       | range | PRIMARY,CREATED_AT | CREATED_AT      | 4       | NULL         | 3891 |   100.00 | Using where; Using index |
|  1 | SIMPLE      | E     | NULL       | ref   | EVENTS_USER_IDX    | EVENTS_USER_IDX | 5       | example.U.ID |   98 |    10.00 | Using where              |
+----+-------------+-------+------------+-------+--------------------+-----------------+---------+--------------+------+----------+--------------------------+

That looks perfectly reasonable: the query selects rows from USERS using the index on CREATED_AT, and then joins to EVENTS using the index on USER_ID. With my test database it returns slightly under 10,000 rows.

But why does it take 12 seconds the first time I run it, and only 1 second for subsequent runs?

The problem is that MySQL's InnoDB engine manages the primary key as a clustered index: it orders the physical rows in the table according to the primary key value. For the join between USERS and EVENTS, this means that the events for a given user are sprinkled all over the disk: each new row is assigned a primary key value that places it at the end of the current data, regardless of which user it belongs to. As a result, selecting all of the rows for a particular user may touch a large number of disk blocks, and every block that isn't already cached must be read from disk.

That's not so bad if you have enough RAM to cache your entire database — which is why my subsequent queries ran so quickly. But if you don't, or queries against the EVENTS table are infrequent, you'll be hitting the disk constantly. Worse, you'll be evicting pages from the disk cache, so all your other queries will slow down because they also have to read data from the disk.
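
If you're not sure whether your data fits in memory, one way to check (just a sketch; “example” is the schema name from this post) is to compare the InnoDB buffer pool size against the data and index sizes that MySQL reports:

show variables like 'innodb_buffer_pool_size';

-- approximate on-disk bytes of row data and indexes for each table in the schema
select  TABLE_NAME, DATA_LENGTH, INDEX_LENGTH
from    information_schema.TABLES
where   TABLE_SCHEMA = 'example';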

So what's the alternative?

create table NEW_EVENTS
(
  ID          int         not null auto_increment,
  CREATED_AT  timestamp   not null default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  TYPE        smallint    not null,
  USER_ID     int         not null,
  VALUE       varchar(255),

  PRIMARY KEY (USER_ID, ID),
  UNIQUE KEY  NEW_EVENTS_ID_KEY(ID)
);
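
If you're converting an existing table, a bulk insert-select will populate it. This is only a sketch, and the sentinel value of 0 for events without a user is my own choice (more on the nullability tradeoff below):

-- copy existing rows; USER_ID can no longer be null, so map missing
-- users to a sentinel value (0 here, purely as an illustration)
insert into NEW_EVENTS (ID, CREATED_AT, TYPE, USER_ID, VALUE)
select  ID, CREATED_AT, TYPE, coalesce(USER_ID, 0), VALUE
from    EVENTS;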

With this new table, my example query has the same execution plan. But the primary key on the new table organizes its data such that rows for the same user are collocated: assuming that your conditions don't select all users, it should read fewer blocks. On my laptop — after restarting MySQL and flushing the disk cache — it took just under half a second for the first query, and 0.04 seconds for subsequent queries.

So clearly the theory is borne out in practice, but can we gain more insight into what's happening? The answer is yes, provided that you have the performance schema enabled for your database (and you really should, even in production!). You can retrieve information about the number of disk operations and the time they take by querying TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE:

select  OBJECT_SCHEMA, OBJECT_NAME, INDEX_NAME,
        COUNT_READ, SUM_TIMER_READ,
        COUNT_WRITE, SUM_TIMER_WRITE,
        COUNT_FETCH, SUM_TIMER_FETCH
from    performance_schema.table_io_waits_summary_by_index_usage
where   OBJECT_NAME in ('USERS', 'EVENTS');

Like other tables in the performance schema, the counters are not tied to a query; you need to ensure that nothing else is happening and truncate the table between your test queries. Arguably better is to restart your server and flush the disk caches, as this will show you the worst-case timings.
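
For example, to reset the counters before each test run:

-- summary tables support TRUNCATE, which zeroes the accumulated counters
truncate table performance_schema.table_io_waits_summary_by_index_usage;

On Linux you can also drop the operating system's page cache after restarting MySQL (as root: sync; echo 3 > /proc/sys/vm/drop_caches) so that the first query really does go to disk.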

Here are the stats for the first query:

+---------------+-------------+-----------------+------------+----------------+-------------+-----------------+-------------+-----------------+
| OBJECT_SCHEMA | OBJECT_NAME | INDEX_NAME      | COUNT_READ | SUM_TIMER_READ | COUNT_WRITE | SUM_TIMER_WRITE | COUNT_FETCH | SUM_TIMER_FETCH |
+---------------+-------------+-----------------+------------+----------------+-------------+-----------------+-------------+-----------------+
| example       | EVENTS      | PRIMARY         |          0 |              0 |           0 |               0 |           0 |               0 |
| example       | EVENTS      | EVENTS_USER_IDX |      97669 | 12350283888004 |           0 |               0 |       97669 |  12350283888004 |
| example       | USERS       | PRIMARY         |          0 |              0 |           0 |               0 |           0 |               0 |
| example       | USERS       | CREATED_AT      |       3892 |     5101149804 |           0 |               0 |        3892 |      5101149804 |
+---------------+-------------+-----------------+------------+----------------+-------------+-----------------+-------------+-----------------+

And for the second:

+---------------+-------------+-------------------+------------+----------------+-------------+-----------------+-------------+-----------------+
| OBJECT_SCHEMA | OBJECT_NAME | INDEX_NAME        | COUNT_READ | SUM_TIMER_READ | COUNT_WRITE | SUM_TIMER_WRITE | COUNT_FETCH | SUM_TIMER_FETCH |
+---------------+-------------+-------------------+------------+----------------+-------------+-----------------+-------------+-----------------+
| example       | NEW_EVENTS  | PRIMARY           |      97669 |   455075854140 |           0 |               0 |       97669 |    455075854140 |
| example       | NEW_EVENTS  | NEW_EVENTS_ID_KEY |          0 |              0 |           0 |               0 |           0 |               0 |
| example       | USERS       | PRIMARY           |          0 |              0 |           0 |               0 |           0 |               0 |
| example       | USERS       | CREATED_AT        |       3892 |     2637773460 |           0 |               0 |        3892 |      2637773460 |
+---------------+-------------+-------------------+------------+----------------+-------------+-----------------+-------------+-----------------+

The number of reads is identical, but look at the difference in the read timer. This value is measured in picoseconds (trillionths of a second), so after adding a decimal point and dropping digits from the end we see that accesses to the EVENTS table took 12.350 seconds for the first query and 0.455 seconds for the second, with the same number of reads in each. The difference is that, to find all of the relevant events for a given user, the first query had to read (and wait for) far more blocks from the disk.

You will also notice that the time taken to access the USERS table via the CREATED_AT index has decreased. This can't be explained by data clustering — indeed, in creating the test data for this post, I ensured that users would not be clustered by creation date. However, I believe that these numbers show the overall benefit of reducing disk operations: by reading fewer blocks in the NEW_EVENTS table, there's a higher likelihood that blocks from USERS will remain in the disk cache.

There are, of course, tradeoffs to reindexing your tables. The most obvious one is that EVENTS.USER_ID can no longer be nullable, because it's part of the primary key. This will cause some pain when dealing with anonymous users: you must insert an explicit value rather than leave the column NULL (I will note that some authorities, including C. J. Date, will say you shouldn't use nulls anyway, so this isn't necessarily a hardship).
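
For example, with 0 standing in for “no user” (my choice of sentinel, not something the schema dictates, and the other values purely illustrative), an anonymous event might be inserted like this:

-- USER_ID is NOT NULL, so anonymous events need an explicit value;
-- an alternative is a placeholder "anonymous" row in USERS
insert into NEW_EVENTS (TYPE, USER_ID, VALUE)
values (7, 0, 'click');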

There's also a price to pay with inserts: to keep the rows ordered by primary key, MySQL may need to split blocks on insert to make room for the new rows. I don't think this is a major issue: for active users, you'll be inserting new blocks into the table rather than splitting existing blocks, and for inactive users you won't often pay the price of a split.

And finally, beware that this technique isn't appropriate everywhere. It works best with large tables, where physical row data has to be read to perform a query (versus a covering index). As with all performance optimizations, there's no substitute for actual measurement.

Sunday, February 26, 2017

Warming up RDS Snapshots

I learned something last week: when you start an RDS database instance from a snapshot, the instance's volume isn't copied all at once. Instead, it's lazily paged in from the snapshot, which is stored in S3. I suppose that shouldn't surprise me; after all, it's documented behavior (albeit for EC2, not RDS).

What did surprise me was just how much of a performance hit it causes. For example, a table scan of one of our large, rarely accessed (so not in cache) tables took about 10 minutes on an instance that had been running for several months. The same query, on a new but otherwise identical replica, took nearly an hour.

Read replicas are one of the places where this behavior is particularly painful: one of the primary reasons for using a replica is to move long-running queries away from your primary machine. But with lazy loading, for the first few hours (or days!) of a replica's life, it's significantly slower than running against the master. Worse, if you ever need to restore from a snapshot, your whole system will run slower while the blocks are loaded.

So what can you do?

You need to “warm up” the replica by reading data from it before you put it into active use. And there are several ways to do this: one is to write queries that touch every table, preferably with a table scan. An alternative is to take a backup of the database and immediately throw it away:

mysqldump --all-databases --host HOST --user USER --password > /dev/null

I'm not convinced this is sufficient: it will touch all of the data blocks, but may leave the index blocks behind.
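
One idea (untested, so treat it as a sketch) is to follow the dump with a covering query against each secondary index, which forces the engine to read the index blocks:

-- count(USER_ID) can be answered from the index alone, so this should
-- walk EVENTS_USER_IDX without touching the table rows
select count(USER_ID) from EVENTS force index (EVENTS_USER_IDX);

I welcome any better approaches.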

Saturday, February 11, 2017

MySQL: when NOT IN is not equal to NOT EXISTS

When you want to perform a difference operation between two tables, you have a choice: NOT EXISTS with a correlated subquery, or NOT IN. The latter is arguably simpler to write and makes the intent of the query more obvious. And modern database systems will optimize the two queries into similar execution plans, handling the correlation between outer and inner queries (I say “modern” because when I was working with Oracle 7.3 in the mid-90s I learned the hard way that it did not).

There is one key difference between the two constructs: if the subquery returns a NULL in its results then the NOT IN condition will fail, because null is neither equal-to nor not-equal-to any other value. But if you guard against that, they should be equivalent — indeed, some sources will tell you that NOT IN is faster and therefore preferred.
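
A quick illustration of why the guard matters:

-- with no nulls in the list, NOT IN behaves as expected
select 3 not in (1, 2);        -- returns 1 (true)

-- a null in the list makes the result NULL, never true, so the row is filtered out
select 3 not in (1, 2, null);  -- returns NULL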

This post is about one case where it's dramatically slower, and nulls are to blame.

Consider the following two tables, which might be used to track clickstream data. Since we track both anonymous and registered users, EVENTS.USER_ID is nullable. However, when the user is not null, the secondary index has high cardinality.

create table USERS
(
  ID    integer auto_increment primary key,
  ...
)

create table EVENTS
(
  ID      integer auto_increment primary key,
  TYPE    smallint not null,
  USER_ID integer
  ...
)

create index EVENTS_USER_IDX on EVENTS(USER_ID);

OK, now let's use these tables: from a small set of users, we want to find the ones that haven't had a particular event. Using a NOT IN, and ensuring that nulls don't appear in the inner results, the query looks like this:

select  ID
from    USERS
where   ID in (1, 7, 2431, 87142, 32768)
and     ID not in
        (
        select  USER_ID
        from    EVENTS
        where   TYPE = 7
        and     USER_ID is not null
        );

For my test dataset, the USERS table has 100,000 rows and the EVENTS table has 10,000,000, of which approximately 75% have a null USER_ID. I'm running on my laptop, which has a Core i7 processor, 12 GB of RAM, and an SSD.

And I consistently get runtimes of around 2 minutes, which is … wow.

Let's replace that NOT IN with a NOT EXISTS and correlated subquery:

select  ID
from    USERS
where   ID in (1, 7, 2431, 87142, 32768)
and     not exists
        (
        select  1
        from    EVENTS
        where   USER_ID = USERS.ID
        and     TYPE = 7
        );

This version runs in 0.01 seconds, which is more what I expected.

Time to compare execution plans. The first plan is from the NOT IN query, the second is from the NOT EXISTS.

+----+--------------------+--------+------------+----------------+-----------------+-----------------+---------+------+------+----------+--------------------------+
| id | select_type        | table  | partitions | type           | possible_keys   | key             | key_len | ref  | rows | filtered | Extra                    |
+----+--------------------+--------+------------+----------------+-----------------+-----------------+---------+------+------+----------+--------------------------+
|  1 | PRIMARY            | USERS  | NULL       | range          | PRIMARY         | PRIMARY         | 4       | NULL |    5 |   100.00 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | EVENTS | NULL       | index_subquery | EVENTS_USER_IDX | EVENTS_USER_IDX | 5       | func |  195 |    10.00 | Using where              |
+----+--------------------+--------+------------+----------------+-----------------+-----------------+---------+------+------+----------+--------------------------+
+----+--------------------+--------+------------+-------+-----------------+-----------------+---------+------------------+------+----------+--------------------------+
| id | select_type        | table  | partitions | type  | possible_keys   | key             | key_len | ref              | rows | filtered | Extra                    |
+----+--------------------+--------+------------+-------+-----------------+-----------------+---------+------------------+------+----------+--------------------------+
|  1 | PRIMARY            | USERS  | NULL       | range | PRIMARY         | PRIMARY         | 4       | NULL             |    5 |   100.00 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | EVENTS | NULL       | ref   | EVENTS_USER_IDX | EVENTS_USER_IDX | 5       | example.USERS.ID |   97 |    10.00 | Using where              |
+----+--------------------+--------+------------+-------+-----------------+-----------------+---------+------------------+------+----------+--------------------------+

Almost identical: both select rows from the USERS table, and then use a nested-loops join (“dependent subquery”) to retrieve rows from the EVENTS table. Both claim to use EVENTS_USER_IDX to select rows in the subquery. And they estimate similar numbers of rows at each step.

But look more closely at the join types. The NOT IN version uses index_subquery, while the NOT EXISTS version uses ref. Also look at the ref column: the NOT EXISTS version uses an explicit reference to the outer column, while the NOT IN uses a function. What's going on here?

The index_subquery join type indicates that MySQL will scan the index to find relevant rows for the subquery. Could that be the problem? I don't think so, because the EVENTS_USER_IDX is “narrow”: it only has one column, so the engine should not have to read a lot of blocks to find rows corresponding to the IDs from the outer query (indeed, I've tried a variety of queries to exercise this index, and all run in a few hundredths of a second).

For more information, I turned to the “extended” execution plan. To see this plan, prefix the query with explain extended, and follow it with show warnings. Here's what you get from the NOT IN query (reformatted for clarity):

/* select#1 */  select `example`.`USERS`.`ID` AS `ID` 
                from    `example`.`USERS` 
                where   ((`example`.`USERS`.`ID` in (1,7,2431,87142,32768)) 
                        and (not((`example`.`USERS`.`ID`,
                            (((`example`.`USERS`.`ID`) in EVENTS 
                            on      EVENTS_USER_IDX checking NULL
                            where   ((`example`.`EVENTS`.`TYPE` = 7) 
                                    and (`example`.`EVENTS`.`USER_ID` is not null))
                            having (`example`.`EVENTS`.`USER_ID`)))))))

I haven't been able to find an explanation of “on EVENTS_USER_IDX checking NULL” but here's what I think is happening: the optimizer believes that it is executing an IN query that can have NULL in the results; it does not consider the null check in the where clause when making this decision. As a result, it will examine the 7.5 million rows where USER_ID is null, along with the few dozen rows where it matches the values from the outer query. And by “examine,” I mean that it will read the table row and only then apply the is not null condition. Moreover, based on the time taken to run the query, I think it's doing this for every one of the candidate values in the outer query.

So, bottom line: whenever you're thinking of using an IN or NOT IN subquery on a nullable column, rethink and use EXISTS or NOT EXISTS instead.

Friday, January 27, 2017

Trusting the Internet: Picking Third-Party Libraries

Many applications today are like the human body:* a relatively small proportion of “in-house” code, leveraged by dozens if not hundreds of third-party libraries — everything from object-relational mappers to a single function that left-pads a string. And that leads to a conundrum: how do you pick the libraries that you include in your project? Or in other words, is it OK to download something from the Internet and make it a fundamental part of your business?

Sometimes, of course, you don't have a choice. If you use the JUnit testing framework, for example, you are going to get the Hamcrest library along with it (and maybe you'll feel some concern that the hamcrest.org domain is no longer registered). But what criteria did you use to pick JUnit?

I was recently faced with that very question, in looking for a library to parse and validate JSON Web Tokens in Java. There are several libraries to choose from; these are the criteria that I used to pick one, from most important to least.

  1. Following the crowd
    If 100,000 projects use a particular library without issue, chances are good that you can too. But how do you know how many projects use a library? For JavaScript projects, npm gives you numbers of downloads; ditto for Ruby projects and the gems they use. Java projects don't have it so easy: while Maven Central does keep statistics on downloads (they're available to package maintainers), that information isn't available to consumers (other than a listing of the top 10 downloads).

    One interesting technique for following the crowd is looking in your local repository, to see if the package is already there as a dependency of another library. If you can create a dependency tree for your project (there's a sketch of this after the list), look at where the candidate lives in the tree: is it close to the root or deep in the weeds? Is it included by multiple other libraries or just one? These are all signals of how the rest of the world views the library.

  2. Documentation
    I believe that care in documentation is a good proxy for care in implementation. Things I want to see are complete JavaDoc and examples. For projects hosted on GitHub, I should be able to understand the library based solely on the README (and there's no reason that a non-GitHub project should omit a README, although I admit to being guilty in that regard).

  3. Author Credibility
    This can be difficult, especially where the “author” is a corporation (although large corporations tend to do their own vetting before letting projects out under the corporate name). In the case of a sole maintainer, I Google the person's name and see what comes up. I'd like to see web pages that demonstrate deep knowledge of the subject (especially for security-related libraries). Even better are slides from a conference, because that implies that the author has at least some recognition in the community.

  4. Issue Handling
    Every library has issues. Does the maintainer respond to them in a reasonable timeframe? A large number of outstanding issues should raise a red flag, as should a maintainer that responds in a non-professional manner. You wouldn't accept that from a coworker (I hope), and by using a package you make the maintainer your coworker.
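
As an example of the dependency-tree technique from item 1: a Maven project can generate its tree with the standard dependency plugin and search it for the candidate (the grep pattern is a placeholder for whatever artifact you're evaluating):

# print the project's full dependency tree, then look for the candidate artifact
# (-B 2 shows the libraries that pulled it in)
mvn dependency:tree | grep -i -B 2 "candidate-artifact-id"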

Once I have decided on a candidate library (or small number of candidates) I try it out for my use case. If it looks good, it becomes part of my application. One thing that I do not do is dig into the library source code.

The promise of open-source software is that you can download the sources and inspect them. The reality is that nobody ever does that — and nobody could, because it would be more than a full-time job. So we choose as best we can, and hope that there isn't a dependency-of-a-dependency-of-a-dependency that's going to hurt us.


* The reference is to the amount of bacteria and other organisms that don't share your DNA but live on or in your body. You'll often find a 10:1 (bacteria:human) ratio quoted, but see this article for commentary on the history and validity of that ratio (tl;dr, it's more like 60:40).