Wednesday, August 9, 2017

Managing Secrets with KMS

Update, April 2018: Amazon just introduced AWS Secrets Manager, which allows you to securely store, retrieve, and version secrets with attached metadata. It also provides direct integration with RDS (MySQL, Postgres, and Aurora only), allowing you to rotate passwords for these services without a code update. Unless you really want to track your secrets in source control, this is a better solution than what's described in this post.


Managing secrets — database passwords, webservice logins, and the like — is one of the more painful parts of software development and deployment. You don't want them to appear in plaintext, because that's a security hole waiting to be exploited. Yet you want them to be stored in your source control system, so that you can track changes. In order to use the secrets you need to decrypt them, but storing a decryption key in source control is equivalent to storing the secrets themselves in plaintext.

I've seen several ways to solve this problem, and liked none of them. On one end of the spectrum are tools like BlackBox, which try to federate user-held keys so that those users can encrypt or decrypt a file of secrets. It was the primary form of secret-sharing at one company I worked at, and I found it quite brittle, with users quietly losing the ability to decrypt files (I suspect as a result of a bad merge).

On the other end of the spectrum is something like HashiCorp Vault, which is a service that provides encrypted secret storage as one of its many capabilities. But you have to manage the Vault server(s) yourself, and you still need to manage the secret that's used to authenticate yourself (or your application) with the service.

One alternative that I do like is Amazon's Key Management Service (KMS). With KMS, you create a “master key” that is stored in Amazon's data center and never leaves. Encryption and decryption are web services, with access controlled via the Amazon Identity and Access Management service. The big benefit of this service is that you can assign the decryption role to an EC2 instance or Lambda function, so that you never need to store physical credentials. The chief drawback is that service-based encryption is limited to 4 KB of data, and you'll pay for each request; for managing configuration secrets, neither should be an issue.

In this post I'm going to show two examples of using KMS. The first is simple command-line encryption and decryption, useful for exchanging secrets between coworkers over an untrusted medium like email. The second shows how KMS can be used for application configuration. To follow along you'll need to create a key, which will cost you $1 for each month or fraction thereof that the key exists, plus a negligible amount per request.
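If you don't already have a key, one can be created from the command line as well as the console. A sketch (this assumes admin-level credentials, and alias/example is the alias used in the examples below; substitute the key UUID that create-key returns):

```shell
aws kms create-key --description "Example key for secrets management"
aws kms create-alias --alias-name alias/example --target-key-id <KeyId from the create-key output>
```

The alias is what lets you refer to the key by a friendly name in later commands.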

Command-line Encryption and Decryption

Security professionals may suffer angina at the very idea of sharing passwords, but there are times when it's the easiest way to accomplish a task. However, the actual process of sharing is a challenge. The most secure way is to write the password on a piece of paper (with a felt-tip pen so that you don't leave an imprint on the sheet below), physically hand that sheet to the recipient, and expect her to burn it after use. But I can't read my own writing, much less expect others to do so. And physical sharing only works when the people are colocated, otherwise the delays become annoying and you have to trust the person carrying the message.

Sending plaintext passwords over your instant messaging service is a bad idea. I trust Slack as much as anybody, but it saves everything that you send, meaning that you're one data breach away from being forced to change all of your passwords. You could use GPG to encrypt the secret, but that requires that the recipient have your public key (mine is here, but do you have faith that I will have control over that server when “I” send you a message?).

If both of you have access to the same KMS master key, however, there is a simple solution:

> aws kms encrypt --key-id alias/example --output text --query CiphertextBlob --plaintext "Hello, world!"
AQICAHhJ6Eby+GBrQVV7F+CECJDvJ9pMoXIVzuATRXZH67SbpgEIMhJrjZwJwV7Ew9xD9dhqAAAAazBpBgkqhkiG9w0BBwagXDBaAgEAMFUGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMp2GUXayB8nDensi1AgEQgCi2kSz2LdSXHw9WOONhBwA+jadJaLL6QgwbdeNMbz3EF/xwRbqOJUV+

That string is a Base64-encoded blob of ciphertext that encrypts both the plaintext and the key identifier. You can paste it into an email or instant messenger window, and the person at the other end simply needs to decode the Base64 and decrypt it.

> echo "encrypted string goes here" | base64 -d > /tmp/cipherblob

> aws kms decrypt --ciphertext-blob fileb:///tmp/cipherblob
{
    "Plaintext": "SGVsbG8sIHdvcmxkIQ==",
    "KeyId": "arn:aws:kms:us-east-1:717623742438:key/dc46c8c3-2269-49ef-befd-b244c7f364af"
}

Well, that's almost correct. I showed the complete output to highlight that while encrypt and decrypt work with binary data, AWS uses JSON as its transport container. That means that the plaintext remains Base64-encoded. To actually decrypt, you would use the following command, which extracts the Base64-encoded plaintext from the response and pipes it through the Base64 decoder:

> aws kms decrypt --ciphertext-blob fileb:///tmp/cipherblob --output text --query Plaintext | base64 -d
Hello, world!
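The same Base64 handling applies if you've saved the full JSON response: the Plaintext field decodes locally, with no further call to KMS. Using the response shown earlier:

```shell
# decode the Base64-encoded Plaintext field from the decrypt response above
echo "SGVsbG8sIHdvcmxkIQ==" | base64 -d
# prints: Hello, world!
```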

Secrets Management

The previous example assumed a user who was allowed to use the key for both encryption and decryption. But the ability to encrypt does not imply the ability to decrypt: you control access to the key using IAM policies, and can grant encryption to one set of users (your developers) and decryption to another (your applications).

Let's start with the policy that controls encryption:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:1234567890:key/dc46c8c3-2269-49ef-befd-b244c7f364af"
            ]
        }
    ]
}

For your own policy you would replace 1234567890 with your AWS account ID, and dc46… with the UUID of your own key. Note that you have to use the UUID rather than an alias (i.e., alias/example from the command line above); you can get this UUID from the AWS Console.
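You can also resolve an alias to the underlying key from the command line. A sketch, assuming the alias/example alias from earlier and credentials that allow kms:DescribeKey:

```shell
# look up the full key ARN (which contains the UUID) for an alias
aws kms describe-key --key-id alias/example --output text --query KeyMetadata.Arn
```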

Attach this policy to the users (or better, groups) that are allowed to encrypt secrets (typically your entire developer group).

The decryption policy is almost identical; only the action has changed.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:1234567890:key/dc46c8c3-2269-49ef-befd-b244c7f364af"
            ]
        }
    ]
}

However, rather than attaching this policy to a user or group, attach it to an EC2 instance role. Then, inside your application, use this code to do the decryption:

    private String decodeSecret(AWSKMS kmsClient, String secret) {
        byte[] encryptedBytes = BinaryUtils.fromBase64(secret);
        ByteBuffer encryptedBuffer = ByteBuffer.wrap(encryptedBytes);

        DecryptRequest request = new DecryptRequest().withCiphertextBlob(encryptedBuffer);
        DecryptResult response = kmsClient.decrypt(request);

        byte[] plaintextBytes = BinaryUtils.copyAllBytesFrom(response.getPlaintext());
        try {
            return new String(plaintextBytes, "UTF-8");
        }
        catch (UnsupportedEncodingException ex) {
            throw new RuntimeException("UTF-8 encoding not supported; JVM may be corrupted", ex);
        }
    }

Note that you pass in the KMS client object: like other AWS client objects, these are intended to be shared. If you use a dependency-injection framework, you can create a singleton instance and inject it where needed. As with any other AWS client, you should always use the client's default constructor (or, for newer AWS SDK releases, the default client-builder), which uses the default provider chain to find actual credentials.
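Putting that together, client creation and use might look like this. A sketch: AWSKMSClientBuilder is the v1 SDK's default client-builder, and loadEncryptedPassword() is a hypothetical stand-in for however you read the encrypted value from your configuration source:

```java
// create a shared client; the builder uses the default provider chain
// to find credentials (instance role, environment, etc.)
AWSKMS kmsClient = AWSKMSClientBuilder.defaultClient();

// decrypt a secret read from configuration (hypothetical helper)
String dbPassword = decodeSecret(kmsClient, loadEncryptedPassword());
```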

This Doesn't Work!

There are a few things that can trip you up. Start debugging by verifying that you've assigned the correct permissions to the users/groups/roles that you think you have: open the user (or group, or role) in the AWS Console and click the “Access Advisor” tab. It's always worth clicking through to the policy, to ensure that you haven't accidentally assigned the encrypt policy to the decrypt user and vice-versa.

Amazon also provides a policy simulator that lets you verify explicit commands against a user/role/group. If you use it, remember that you have to explicitly reference the ARN of the resource that you're testing (in this case, the key); by default the Policy Simulator uses a wildcard (“*”), which will be rejected by a well-written policy.

When making changes to policies, remember that AWS is a distributed system, so changes may take a short amount of time to propagate (the IAM FAQ uses the term “almost immediately” several times).

And lastly, be aware that KMS keys have their own policies, and that the key's own policy must grant access to the AWS account for any IAM roles to be valid. If you create your KMS key via the console it will have an appropriate default policy; this may not be the case if you create it via the SDK or command line (although the docs indicate that, even then, there's a default policy that grants access to the account, so this problem is unlikely).
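For reference, the console-generated default key policy contains a statement much like the following (the account ID here is a placeholder), which delegates access control for the key to IAM:

```json
{
    "Sid": "Enable IAM User Permissions",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
    },
    "Action": "kms:*",
    "Resource": "*"
}
```

Without a statement like this in the key policy, the IAM policies shown above have no effect on the key, no matter what they grant.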

But I don't want to be locked in to AWS!

I've heard several people raise this issue, about various AWS services. It's not an issue that I think much about: the last few companies that I've worked for were running 100% in AWS. We're already locked in, making use of multiple AWS services; KMS is just one more. If you're also running entirely in AWS, I think that you should embrace the services available, and not worry about lock-in; moving to another cloud provider isn't a task to be undertaken lightly even if you're just using compute services.

For those running hybrid deployments (part in the cloud, part in a datacenter), or who use cloud services from multiple vendors, the concern is perhaps more relevant, if only because your operations become dependent on the network connection between you and AWS.

The cost-benefit analysis in that case becomes a little more complex. I still think that KMS — or a similar service provided by other clouds, like Azure Key Vault — is worthwhile, simply because it applies rigor to your secrets management. With a little thought to your configuration management, you should be able to keep running even if the cloud is unavailable.
