Monday, November 20, 2017

NAT Instance versus NAT Gateway

One of the typical configurations for an Amazon Web Services (AWS) Virtual Private Cloud (VPC) is dividing it into public and private subnets. Your application servers and databases run in the private part of the VPC, inaccessible from the Internet. The only things living in the public part of the cloud are your load balancers and a bastion host.

The problem with this architecture is that servers on the private subnet are truly isolated: they won't receive connections from the open Internet, but neither can they make such connections. So you won't be able to connect to an external data provider, load updates from an external source, or even connect to most of Amazon's services. To let your servers connect to the Internet (but still prevent the Internet from connecting to them) you need a NAT.

Amazon gives you not one, but two options for your NATting: NAT Gateways and NAT Instances. NAT Instances (I'll capitalize in this post) have been around since VPCs became available: they're simply EC2 instances with specially configured routing tables. NAT Gateways were introduced in October 2015; they are part of the VPC infrastructure, like the routers that let your subnets communicate with each other.
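From the routing table's point of view, the difference between the two is just which target field the default route points at. Here's a sketch, assuming boto3 (the `ec2` parameter should behave like a boto3 EC2 client, and the IDs are placeholders, not real resources):

```python
def add_default_route(ec2, route_table_id, *, nat_gateway_id=None, instance_id=None):
    """Point a private subnet's default route at a NAT.

    `ec2` should behave like a boto3 EC2 client; the IDs are placeholders.
    The only structural difference between the two NAT flavors is which
    target field the route uses.
    """
    kwargs = {"RouteTableId": route_table_id, "DestinationCidrBlock": "0.0.0.0/0"}
    if nat_gateway_id:
        kwargs["NatGatewayId"] = nat_gateway_id   # managed NAT Gateway
    else:
        kwargs["InstanceId"] = instance_id        # NAT Instance (an EC2 box)
    return ec2.create_route(**kwargs)
```

Passing the client in makes the function trivial to test with a stub, which is also a reasonable pattern for your own infrastructure scripts.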

Amazon provides a comparison table to help you decide which you should pick. Reading through that table it seems like a no-brainer: pick the NAT Gateway. But in the real world the decision is a little more complex, which is no doubt why Amazon still maintains AMIs for NAT Instances. The rest of this post walks through some of the things that you should consider when configuring your network (although, if in doubt at the end, pick the NAT Gateway).

Do you even need a NAT?

If you're running a self-contained application consisting of web-servers, databases, and/or other home-grown infrastructure, you might not need a NAT. This will require some discipline on your part: for example, any non-AWS-supplied software has to be installed onto an AMI, which is then launched by your auto-scaling group(s). Same for updates: you can't patch an existing server, you must build a new AMI. But this is, arguably, how you should be deploying for the cloud anyway.

A bigger problem with this strategy is that it severely limits the AWS services that you can use, because they're also accessed via public IP addresses. So if you use SQS queues to connect worker instances, you'll need a NAT. Ditto for Kinesis, CloudWatch, and almost any other AWS service. There are two exceptions as of the time I'm writing: Amazon provides VPC endpoints for S3 and DynamoDB. These endpoints are essentially routing rules that direct traffic to those services through Amazon's network rather than over the Internet. I hope that Amazon makes more services available this way, but am not holding my breath: the S3 endpoint appeared in early 2015, but DynamoDB wasn't supported until late 2016.
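Setting up one of those endpoints is a one-time routing change. A sketch, again assuming boto3, placeholder IDs, and the standard region-qualified service name format:

```python
def add_s3_endpoint(ec2, vpc_id, route_table_ids, region="us-east-1"):
    """Create a gateway VPC endpoint so S3 traffic skips the NAT entirely.

    `ec2` should behave like a boto3 EC2 client; the VPC and route table
    IDs are placeholders for your own resources.
    """
    resp = ec2.create_vpc_endpoint(
        VpcId=vpc_id,
        ServiceName=f"com.amazonaws.{region}.s3",  # DynamoDB: ...{region}.dynamodb
        RouteTableIds=route_table_ids,             # AWS adds prefix-list routes here
    )
    return resp["VpcEndpoint"]["VpcEndpointId"]
```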

How much traffic will be going through the NAT?

As I said, NAT Gateways seem to be a no-brainer choice, so that's what I did when first moving our third-party integrations into the VPC. And then at the end of the first month, our CFO asked about this new, large charge on the AWS bill. The reason, as I discovered after looking at flow logs, was that we pushed about a terabyte of data through the NAT each day (which is way too much, but that's a project for another day).

Amazon's data transfer pricing rules are, in a word, Byzantine, and NAT Gateways add another layer to the model. But the bottom line is that you'll pay 4½¢ per gigabyte for traffic through the NAT. That doesn't seem like much, but it translates to $45 per terabyte; if you're pushing a terabyte a day, it adds up quickly. By comparison, with a NAT Instance you pay only the basic bandwidth charges, just as if your server had a public IP.

Beware, however, that one NAT Instance isn't enough: you'll want at least two for redundancy. And AWS will charge you for cross-AZ traffic within your VPC, so you'll probably want one per availability zone. But if you're pushing enough traffic, the cost of the NAT Instances will be less than the cost of a NAT Gateway. You can also adjust the instance size based on your traffic, although I wouldn't recommend one of the smaller T2 instances.
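The break-even arithmetic is easy to sketch. The per-GB figure comes from the post; the hourly rates below are illustrative assumptions, so check current pricing for your region and instance type:

```python
# Back-of-the-envelope NAT cost comparison. Hourly rates are illustrative
# assumptions -- check current AWS pricing for your region.

GATEWAY_HOURLY = 0.045   # assumed NAT Gateway hourly charge, USD
GATEWAY_PER_GB = 0.045   # NAT Gateway data-processing charge, USD/GB
INSTANCE_HOURLY = 0.05   # assumed hourly cost of one NAT Instance, USD

def monthly_nat_gateway_cost(gb_per_day, hours=730):
    """Gateway hourly charge plus per-GB processing for a month's traffic."""
    return GATEWAY_HOURLY * hours + GATEWAY_PER_GB * gb_per_day * 30

def monthly_nat_instance_cost(instance_count, hours=730):
    """NAT Instances pay only the hourly rate; bandwidth is billed the same
    for both options, so it drops out of the comparison."""
    return INSTANCE_HOURLY * hours * instance_count

# At 1 TB/day the per-GB processing charge dominates:
print(round(monthly_nat_gateway_cost(1024)))   # → 1415
print(round(monthly_nat_instance_cost(4)))     # → 146 (one instance per AZ)
```

With these (assumed) numbers, four NAT Instances cost roughly a tenth of a NAT Gateway at that traffic level; at low traffic the comparison tips the other way.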

Are you willing to care for a pet?

If you're not familiar with the “cattle vs pets” metaphor, take a look at this slide deck. To summarize: you spend a lot of time and money caring for a family pet, but cattle barons turn their herds out to graze in the summer, round them up in the winter, and if one dies that's too bad for it. Or, as it applies to deployments, cloud applications don't get patched, they get terminated and rebuilt.

But you can't think of a NAT Instance that way. It is a critical part of your infrastructure, and if it ever goes down the machines on your private subnet(s) won't be able to talk to the Internet.

And unfortunately, there are a lot of reasons for the instance to go down.

One reason is that the underlying hardware suffers a problem. For this there is no warning, so you'll need a pager alarm to let you know when the NAT is down, and a plan for bringing it back up (typically by re-assigning an elastic IP to another instance). Along the same lines, AWS may schedule maintenance on an instance; the difference here is that you'll have days to plan or preempt the outage. And lastly, there will be times when you need to restart the NAT yourself, as a result of applying kernel-level patches.
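That elastic-IP re-assignment is a one-call recovery step. A minimal sketch, assuming boto3 and hypothetical allocation/instance IDs, with the client passed in so the logic can be exercised without AWS credentials:

```python
def promote_standby_nat(ec2, allocation_id, standby_instance_id):
    """Re-associate the failed NAT's elastic IP with a standby instance.

    `ec2` is expected to behave like a boto3 EC2 client; `allocation_id`
    and `standby_instance_id` are placeholders for your own resources.
    """
    # AllowReassociation lets the EIP move even though it is still
    # nominally attached to the failed NAT instance.
    resp = ec2.associate_address(
        AllocationId=allocation_id,
        InstanceId=standby_instance_id,
        AllowReassociation=True,
    )
    return resp.get("AssociationId")
```

In practice this is the sort of thing you'd wire into the runbook behind that pager alarm, rather than run by hand.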

Regardless of how the NAT goes out of service, it will take all of its open connections with it. Your applications must be written to handle network timeouts and reconnect, or to recover gracefully when the NAT is shut down (in which case it's often easiest to just reboot all of the machines in the availability zone and let them establish new connections).
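One common shape for that reconnect logic is a retry loop with exponential backoff around the connection attempt. This is a generic sketch, not tied to any particular client library:

```python
import time

def call_with_reconnect(connect, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call `connect` until it succeeds, backing off exponentially.

    `connect` is any zero-argument callable that raises OSError (the parent
    class of socket and connection errors) when the far end -- for example,
    a NAT that just went down -- drops the connection.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except OSError:
            if attempt == attempts - 1:
                raise   # out of retries; let the caller decide what to do
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injectable purely so the loop can be unit-tested without real delays.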

For us, the cost of running and caring for four NAT Instances remains less than the cost of the NAT Gateway. If we can bring down our traffic to third-party sites that decision will be re-evaluated.

Thursday, November 2, 2017

Ghost Traffic on AWS

We're reconfiguring some of our network endpoints at work, and yesterday, the day after Halloween, I saw this traffic appear in our local access logs:

- - [01/Nov/2017:15:09:43 -0400] "GET /grave/Charles-Karlson/16427428 HTTP/1.1" 404 967

Line after line. Every few seconds someone — or something — is asking about another grave.

This environment was running under Amazon's Elastic Beanstalk service, which assigns public DNS addresses based on the environment name, and I figured we probably weren't the only people to come up with the name tracker-test-two. Which is true, and leads to one of the take-aways that I'll list below. But the truth, as revealed from the ELB access logs, was stranger:

2017-11-01T19:09:43.168404Z awseb-e-f-AWSEBLoa-1FG2EMDGMS77G 0.000045 0.003369 0.00002 404 404 0 967 "GET HTTP/1.1" "Mozilla/5.0 (compatible; SemrushBot/1.2~bl; +" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2

That's a real site: if you go to it, you'll see an image of a gravestone and information about the person buried under that stone. Useful, I'm sure, to people who are tracking their ancestry. But why was the traffic going to our web server?

The answer is DNS caching: to reduce load on the domain name system, intermediate nameservers are allowed to cache the results of a DNS lookup. That generally works pretty well, and the record in question has a 60-second TTL, which should have stopped traffic to that IP address fairly quickly (if you haven't already guessed, they also run on AWS servers, and recently reconfigured their environments). However, some programs and platforms do their own caching and don't honor the TTL; Java is particularly bad at this. So SemrushBot is using its cached address for the hostname, and will continue to make those requests until it's restarted.
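Honoring the server's intentions means expiring your cached lookup when the TTL runs out. (In Java the real fix is the `networkaddress.cache.ttl` security property; the toy sketch below just illustrates the behavior, with the clock and resolver injectable for testing.)

```python
import socket
import time

class TTLCachingResolver:
    """Cache hostname lookups, but only for `ttl` seconds -- unlike a
    resolver that holds an address forever, it will notice when the
    target moves to a new IP."""

    def __init__(self, ttl=60, resolve=socket.gethostbyname, clock=time.monotonic):
        self.ttl = ttl
        self.resolve = resolve   # swap in a stub for testing
        self.clock = clock
        self._cache = {}         # hostname -> (address, expiry)

    def lookup(self, hostname):
        entry = self._cache.get(hostname)
        if entry and entry[1] > self.clock():
            return entry[0]      # still fresh
        address = self.resolve(hostname)
        self._cache[hostname] = (address, self.clock() + self.ttl)
        return address
```

A real implementation would use the TTL from the DNS response rather than a fixed constructor argument, but the expiry check is the part most home-grown caches get wrong.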

There was another interesting artifact in our logs, although not quite as spooky:

2017-11-02T11:49:26.343413Z awseb-e-h-AWSEBLoa-1SXOP8PJPZS8R 0.000116 0.013641 0.000074 200 200 0 7520 "GET HTTP/1.1" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2

This traffic was appearing every few seconds in the logs for our old server group. This isn't a DNS caching issue: it's hitting an explicit IP address. And it doesn't look like a script kiddie trying to find vulnerabilities in our servers: it's always the same endpoint, always from the same source IP (which is probably somewhere in the Midwest), and happens every 60 seconds. As far as I can tell, it's a homegrown health-check program.

Which means that someone is going to panic when I shut down that environment later this morning.

To wrap up, here are some take-aways that you might find useful:

  1. Someone had that IP address before you. Expect unexpected traffic, and think about how you will react if it's excessive.
  2. Don't write your own health checks. If you're using an ELB, you can configure it to run a health check and raise an alarm when the health check fails.
  3. If you feel you must write your own health check, use a hostname rather than an explicit IP. I know you think you'll have that IP forever, but you probably won't.
  4. Pick an appropriate time-to-live for your DNS entries. The Internet Police might yell at me for suggesting 60 seconds, but it makes your life a lot easier if you need to transition environments. Interestingly, Google (which hosts this blog) uses 78 seconds.
  5. If you're using Elastic Beanstalk, use URLs that contain random text (the environment ID is a good choice). Remember that it's a shared namespace.
  6. If you're writing an application that repeatedly connects to remote hosts, understand how your platform deals with DNS caching, and try to honor the server's intentions.
  7. If you run — or any site with similar content — don't change your IPs on Halloween.