A common deployment structure for Amazon Virtual Private Clouds (VPCs) is to separate your servers into public and private subnets. For example, you put your webservers in the public subnet and your database servers in the private subnet. Or, for more security, you put all of your servers in the private subnet, with an Elastic Load Balancer (ELB) in the public subnet as the only point of contact with the open Internet.
The problem with this second architecture is that you have no way to get to those servers for troubleshooting: the definition of a private subnet is that it does not expose servers to the Internet.*
The standard solution involves a “bastion” host: a separate EC2 instance that runs on the public subnet and exposes a limited number of ports to the outside world. For a Linux-centric shop, it might expose port 22 (SSH), usually restricted to a limited number of source IP addresses. In order to access a host on the private network, you first connect to the bastion host and then from there connect to the private host (although there's a neat trick with netcat, sketched below, that lets you connect via the bastion without an explicit login).
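For the curious, that netcat trick is a one-line ProxyCommand in your SSH config. Here's a minimal sketch; the host names and addresses are placeholders for your own:

# ~/.ssh/config
Host bastion
    HostName 203.0.113.10          # the bastion's public IP (placeholder)
    User ec2-user

Host private-host
    HostName 10.0.1.25             # the target's private IP (placeholder)
    User ec2-user
    # ssh to the bastion and have it run netcat to pipe the connection onward
    ProxyCommand ssh bastion nc %h %p

With that in place, ssh private-host tunnels transparently through the bastion; you never see a shell prompt on it.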
The problem with a bastion host — or, for Windows users, an RD Gateway — is that it costs money. Not much, to be sure: ssh forwarding doesn't require much in the way of resources, so a t2.nano instance is sufficient. But still …
It turns out that you've already got a bastion host in your public subnet: the ELB. You might think of your ELB as just a front-end for your webservers: it accepts requests and forwards them to one of a fleet of servers. If you get fancy, maybe you enable session stickiness, or do HTTPS termination at the load balancer. But what you may not realize is that an ELB can forward any TCP port.**
So, let's say that you're running some Windows servers in the private subnet. To expose them to the Internet, go into your ELB config and forward traffic from port 3389:
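If you'd rather script that than click through the console, the AWS CLI can add the listener to a Classic Load Balancer. A sketch, with the load balancer name as a placeholder:

# add a TCP listener that passes port 3389 straight through to the instances
# ("web-elb" is a placeholder for your load balancer's name)
aws elb create-load-balancer-listeners \
    --load-balancer-name web-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=3389,InstanceProtocol=TCP,InstancePort=3389"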
Of course, you don't really want to expose those servers to the Internet, you want to expose them to your office network. That's controlled by the security group that's attached to the ELB; add an inbound rule that just allows access from your home/office network (yeah, I'm not showing my real IP here):
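Scripted, the rule might look like this; the group ID and CIDR are placeholders (substitute your office network's actual range):

# allow RDP to the ELB only from the home/office network
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3389 \
    --cidr 203.0.113.0/24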
Lastly, if you use an explicit security group to control traffic from the ELB to the servers, you'll also need to open the port on it. Personally, I like the idea of a “default” security group that allows all components of an application within the VPC to talk with each other.
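If you do use a separate group, the rule can name the ELB's security group as the source instead of a CIDR. Again a sketch, with both group IDs as placeholders:

# on the servers' security group, allow 3389 from the ELB's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaabbbbcccc1111 \
    --protocol tcp \
    --port 3389 \
    --source-group sg-0ddddeeeeffff2222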
You should now be able to fire up your favorite RDP client and connect to a server.
> xfreerdp --plugin cliprdr -u Administrator 52.15.40.131
loading plugin cliprdr
connected to 52.15.40.131:3389
Password:
...
The big drawback, of course, is that you have no control over which server you connect to. But for many troubleshooting tasks, that doesn't matter: any server in the load balancer's list will show the same behavior. And in development, where you often have only one server, this technique lets you avoid creating a special configuration that won't run in production.
* Actually, the definition of a public subnet is that it routes non-VPC traffic to an Internet Gateway, which is a precondition for exposing servers to the Internet. However, this isn't a sufficient condition: even if you have an Internet Gateway you can prevent access to a host by not giving it a public IP. But such pedantic distinctions are not really relevant to the point of this post; for practical purposes, a private subnet doesn't allow any access from the Internet to its hosts, while a public subnet might.
** I should clarify: the Classic Load Balancer can forward any port; an Application Load Balancer just handles HTTP and HTTPS, but has highly configurable routing. See the docs for more details.