Effective Engineering in the Cloud: The Web Tier

Welcome back to the second article in a series about how we deploy on AWS and how it has helped us evolve and grow our business, with a focus on being responsive to our customers’ needs rather than on minimizing what we spend on product hosting. In my previous article, I provided a high-level view of the technology we use. In this post, I’ll describe our web tier, how AWS has helped us, and a few lessons we have learned.

For our web tier, we deploy several EC2 instances behind an ELB and use Auto Scaling to ensure that we always have enough server capacity to meet demand. This is by far our most “cloudy” tier. As a backup company, we use a ton of resources for our backup and search roles, but our application is largely a set-and-forget tool, so we do not see much web-based traffic. Consequently, we run a statically sized Auto Scaling group.
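A statically sized group like this can be expressed by pinning the group’s minimum, maximum, and desired capacity to the same value. Here is a minimal sketch using the AWS CLI; all of the names (the launch configuration, group, AMI ID, load balancer, and AZs) are placeholders, not our actual configuration:

```shell
#!/bin/sh
# Sketch only: create a launch configuration from a pre-baked AMI,
# then a fixed-size Auto Scaling group behind a classic ELB.
# All names and IDs below are hypothetical placeholders.

aws autoscaling create-launch-configuration \
  --launch-configuration-name web-tier-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large

# min == max == desired pins the group at a fixed size; spreading it
# across multiple AZs gives resilience without multi-region complexity.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-tier-asg \
  --launch-configuration-name web-tier-lc \
  --min-size 4 --max-size 4 --desired-capacity 4 \
  --availability-zones us-east-1a us-east-1b us-east-1c \
  --load-balancer-names web-tier-elb
```

Even without scaling policies, the group still earns its keep: if an instance fails its health check, Auto Scaling replaces it automatically to restore the desired count.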

The Auto Scaling group is deployed across multiple AZs, all within a single region. A single-region deployment carries a small risk of a full region becoming unavailable, but we have not prioritized the work to deploy and load balance across multiple regions: in our experience such outages are rare and short-lived, so the effort would not be a good return. Typically, when a full AWS region goes down, people spend more time worrying about Netflix, Pinterest, or Reddit being unavailable than trying to log into our app to do a restore or make a configuration change.

We pass SSL through the ELB to our app. This is a fairly standard security practice that ensures any cookies we set carry the Secure flag. AWS makes managing our ELB and SSL configuration painless: with AWS’s ELB, we do not need to manage the scalability or reliability of our own load balancer. This is a central theme that I’ll return to again and again: let someone else handle the commodity configuration and management of your infrastructure so that you can pay attention to building features that delight your customers. The only downside of using managed infrastructure components is that you do not have direct access to every setting. This is usually a Good Thing™, as it forces you to keep things as simple and resilient as possible and to focus on your architectural approach instead of over-optimizing any individual component.
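One way to wire this up on a classic ELB is an HTTPS listener that re-encrypts to the instances, so traffic stays encrypted end to end. The sketch below assumes a certificate already uploaded to IAM; the load balancer name and certificate ARN are hypothetical:

```shell
#!/bin/sh
# Sketch only: add an HTTPS:443 listener to a classic ELB that forwards
# to HTTPS:443 on the backend instances. The names and the certificate
# ARN are placeholders, not a real configuration.
aws elb create-load-balancer-listeners \
  --load-balancer-name web-tier-elb \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/web-tier-cert"
```

The alternative is a plain TCP:443 listener that passes the encrypted stream through untouched, with the certificate living on the instances instead; terminating at the ELB keeps certificate management in one place.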

All of the EC2 instances we launch use the same base image, which has the required OS and third-party software pre-installed. We update our base image once a month to stay current with security patches. We want our instances to launch in well under five minutes, and pulling software from external resources puts that at risk; even worse would be being unable to launch more instances because something required is down. We use Chef to deploy our code and configure these instances, simply passing a user-data script into each instance that tells the Chef client which role to configure.
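A user-data script in this style can be very short, since the baked AMI already carries the Chef client and everything else it needs. The following is a hypothetical sketch (the role name is a placeholder, and we are assuming the client is pre-registered with its Chef server), not our actual script:

```shell
#!/bin/sh
# Sketch only: first-boot user-data for a pre-baked AMI.
# The Chef client is already installed on the image, so all the
# script has to do is name the role this instance should converge.
ROLE="web"

# Run a single converge against the given role; the role (defined on
# the Chef server) pulls in the recipes that deploy and configure our app.
chef-client --once --runlist "role[${ROLE}]"
```

Keeping the role name as the only per-instance input means the same AMI can serve any tier; the user-data decides what the instance becomes.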

So, there you have it: a secure, scalable, and resilient web application running in AWS using commodity building blocks. In the next few posts, I’ll talk about our DB and worker tiers, where most of our engineering effort is focused. It is in those tiers that we take advantage of the elastic nature of AWS’s offerings.

