Human Error Is Still Amazon Cloud’s Achilles Heel
Written by Frank Hayes

The Amazon Cloud outage on December 24—the one that knocked Netflix offline for much of Christmas Eve—was due purely to human error. And it was the dumbest sort of human error: an Amazon developer with special privileges mistakenly ran a maintenance process against the production system, wiping out critical state data—and then didn’t realize he had crippled the system until hours after it began causing problems for customers, according to the version of events Amazon released on Monday (Dec. 31).
It then took more than 12 hours (including a false start or two) for Amazon’s team to re-create the data, and several more hours to slowly get the system working again. Total outage time: possibly the longest 23 hours and 41 minutes in Amazon’s history.
According to Amazon’s own summary of the outage—beg pardon, “service event”—the problem originated in the load-balancing systems for Amazon’s cloud and only affected customers in Amazon’s US-East region. At 12:24 PM Pacific time (3:24 PM Eastern) on December 24, “a portion of the ELB [Elastic Load Balancing] state data was logically deleted. This data is used and maintained by the ELB control plane to manage the configuration of the ELB load balancers in the region (for example, tracking all the backend hosts to which traffic should be routed by each load balancer),” according to Amazon.
Translation: Amazon’s cloud forgot how customers’ existing load balancers were configured, including which backend hosts each one was supposed to send traffic to.
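To make the quoted description concrete, here is a purely illustrative sketch of the kind of state the ELB control plane keeps: a mapping from each load balancer to the backend hosts its traffic should reach. Amazon’s real data model isn’t public, so the names and structure below are assumptions; the point is only what happens once that mapping is gone.

```python
# Illustrative sketch of ELB control-plane state data; the real data model
# is not public, so the names and structure here are assumptions.

# Each load balancer maps to the backend hosts its traffic is routed to.
elb_state = {
    "customer-a-web": ["10.0.1.12", "10.0.1.13", "10.0.1.14"],
    "customer-b-api": ["10.0.2.21", "10.0.2.22"],
}

def backend_hosts(load_balancer: str) -> list[str]:
    """Control-plane lookup: which hosts should this load balancer route to?"""
    return elb_state[load_balancer]

# A maintenance job that "logically deletes" this state leaves the control
# plane unable to manage any existing load balancer.
elb_state.clear()
try:
    backend_hosts("customer-a-web")
except KeyError:
    print("Configuration for existing load balancer is gone")
```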
The data was deleted by “one of a very small number of developers who have access to this production environment,” who inadvertently ran a maintenance process against the production ELB state data, according to Amazon’s report.
How was that possible? It turns out that most access to the cloud’s production environment goes through a strict change management process, which should have prevented this mistake. But Amazon is in the process of automating some cloud-maintenance tasks, and a small number of developers have permission to run those processes manually. It also turns out that once those developers had been granted access, they didn’t have to request it again—in effect, getting rid of the “Do you really want to bring the Amazon Cloud crashing down? OK/Cancel” message.
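As a rough illustration of the gap (not Amazon’s actual tooling; every name here is hypothetical), the sketch below shows a change-management gate for maintenance jobs: production runs require an approved change ticket, and approval is checked on every run rather than granted once and remembered.

```python
# Hypothetical change-management gate; Amazon's real access tooling is not
# public, so the names and ticket format here are assumptions for illustration.

APPROVED_CHANGE_TICKETS = {"CM-20121224-0042"}  # tickets approved via change management

def run_maintenance(process_name: str, environment: str,
                    change_ticket: str | None = None) -> None:
    """Run a maintenance process, gating production runs behind change management."""
    if environment == "production":
        # Re-check approval on every run; no lingering one-time grant.
        if change_ticket not in APPROVED_CHANGE_TICKETS:
            raise PermissionError(
                f"{process_name}: production run requires an approved change ticket"
            )
    print(f"Running {process_name} against {environment}")

run_maintenance("elb-state-maintenance", "sandbox")               # fine without a ticket
run_maintenance("elb-state-maintenance", "production",
                change_ticket="CM-20121224-0042")                 # approved production run
```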
Yes, Amazon has fixed that—now everything goes through change management. Back to the timeline:
At 12:24 PM on December 24, ELB state data was deleted. “The ELB control plane began experiencing high latency and error rates for API calls to manage ELB load balancers,” according to Amazon. But the system could still handle requests to create and manage new load balancers, because those didn’t depend on the deleted state data.
Amazon’s technical teams spotted the API errors but missed the pattern: new load balancers could be managed normally, while older (pre-12:24 PM) load balancers couldn’t, because their configuration data was gone.
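In hindsight, the telling signal was that split between old and new load balancers. A toy sketch of that kind of correlation check (hypothetical data and field names, not Amazon’s monitoring) might group API errors by whether each load balancer was created before or after the 12:24 PM deletion:

```python
# Toy diagnostic: group API errors by load-balancer creation time to surface
# the pattern that only pre-deletion load balancers are failing.
# All data and field names below are hypothetical.
from datetime import datetime

DELETION_TIME = datetime(2012, 12, 24, 12, 24)  # 12:24 PM PST, when state data was lost

api_calls = [
    {"load_balancer": "customer-a-web", "created": datetime(2012, 12, 20, 9, 0),   "error": True},
    {"load_balancer": "customer-b-api", "created": datetime(2012, 12, 22, 14, 30), "error": True},
    {"load_balancer": "new-lb-1",       "created": datetime(2012, 12, 24, 13, 5),  "error": False},
]

errors = {"pre-deletion": 0, "post-deletion": 0}
for call in api_calls:
    if call["error"]:
        bucket = "pre-deletion" if call["created"] < DELETION_TIME else "post-deletion"
        errors[bucket] += 1

print(errors)  # errors concentrated on load balancers created before 12:24 PM
```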
Meanwhile, some customers began to see performance problems with their cloud applications. It wasn’t until the team started digging into the specifics of those problems that it spotted the missing state data as the root cause.
At 5:02 PM on December 24, the Amazon team stopped the spread of the problem and began looking for a way to fix it.