PCI’s New Cloud Guidance: Great Ideas, Short On Realism
That said, Brenton added, there’s no reason CSPs couldn’t release this data, with a little extra effort.
“Some API calls may be hypervisor-specific or impact all VMs running on it. Log entries of this sort may need some sanitation prior to release to clients. However, many API calls will be things like ‘search storage on this specific VM looking for this specific pattern,’” Brenton said. “This is the type of info that the client absolutely must have to ensure that pattern is something good—such as searching for a known malware signature as part of an AV solution—or something bad—like pattern-matching for credit-card info. I think the litmus test is going to be ‘Can the client get that info through some other means?’ For example, an IaaS (infrastructure as a service) CSP probably would not have to hand over firewall logs, because the client is free to run a host-based firewall and log all activity. What’s different about introspection is that it leaves no forensic trail on the server itself, so the CSP is now the only source of that information.”
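To make that distinction concrete, here is a rough sketch of what sanitizing introspection logs before release might look like. Everything in it is illustrative: the log format, field names and API call names are invented for the example, not drawn from any real CSP.

```python
# Illustrative only: a hypothetical introspection-log format, not a real CSP API.

HOST_WIDE_CALLS = {"host.memory.snapshot", "hypervisor.patch", "host.vm.enumerate"}

def sanitize_for_client(entries, client_vm_id):
    """Return only the log entries a given client is entitled to see.

    Calls scoped to the client's own VM pass through unchanged. Host-wide
    calls, which may reveal details of the hypervisor or of other tenants,
    are reduced to a redacted stub: the client learns that something
    happened, but not the specifics. Entries scoped to other tenants'
    VMs are dropped entirely.
    """
    released = []
    for entry in entries:
        if entry.get("target_vm") == client_vm_id:
            released.append(entry)  # client's own VM: full detail
        elif entry["call"] in HOST_WIDE_CALLS:
            released.append({
                "timestamp": entry["timestamp"],
                "call": "host-wide operation (redacted)",  # sanitized stub
            })
    return released

log = [
    {"timestamp": "2013-02-14T16:26:00Z", "call": "storage.search",
     "target_vm": "vm-42", "pattern": "known-malware-signature"},
    {"timestamp": "2013-02-14T16:27:00Z", "call": "host.memory.snapshot",
     "target_vm": None},
    {"timestamp": "2013-02-14T16:28:00Z", "call": "storage.search",
     "target_vm": "vm-99", "pattern": "card-number-regex"},
]

print(sanitize_for_client(log, "vm-42"))
```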
And it’s precisely that lack of a forensic trail that makes introspection so attractive to cyberthieves. No need to clean up the fingerprints after the crime: This crime scene comes pre-police-proofed.
Brenton directly disagreed with Conway on the big-picture argument that shared clouds are simply not acceptable for PCI compliance, especially for the largest chains. “That’s an incorrect statement and specifically why we created the guidance. Compliance in public space is possible, provided the proper controls are in play. For example, this introspection issue we are discussing,” he said. Two ways to get around it, Brenton added, are to “generate all the log info specified in the guidance” or to “not deploy introspection and implement compensating controls through other means.”
One retail security executive—working for one of the five largest U.S. chains—said the potential for problems with introspection is not trivial.
“Introspection allows a client VM to see host memory. If my VM and your VM were on the same host, I could read the host’s memory, which contains all the memory on the box, and could read your processes’ memory and scrape card numbers out of your environment. That’s the risk,” she said. “And just to be clear, I can see the host’s memory by exploiting OS or application holes, not by being granted full access.”
The problem is worse than that, though, as such issues can happen accidentally.
“Normally, that’s a technical task and most retailers wouldn’t maliciously try to do that. But are you sure every neighboring VM on your host is that honest? Worse, it could be easy or accidental. If I buy an introspection-based tool, it scrapes all the host memory looking for badness. If it searches for common patterns, it could report that it found Visa card #4123456789012345 (because it’s 16 digits that start with a 4). That might be from your VM’s memory, not mine. And now I have a card number from you in my logs.”
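For illustration, this is roughly the kind of pattern match such a tool might run. The regex, the Luhn filter and the sample memory dump are all assumptions made for the sketch, not any vendor’s actual implementation. Note that a Luhn checksum weeds out random 16-digit runs, but a genuine card number sitting in a neighbor’s VM passes it just as easily, which is exactly the risk she describes.

```python
import re

# Naive Visa-style candidate: a '4' followed by 15 more digits.
PAN_PATTERN = re.compile(r"\b4\d{15}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum, commonly used by DLP tools to cut false positives."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_memory(blob: bytes):
    """Scan a raw memory dump for strings that look like Visa PANs."""
    text = blob.decode("ascii", errors="ignore")
    return [m for m in PAN_PATTERN.findall(text) if luhn_ok(m)]

# A host-wide scrape mixes bytes from every tenant's VM, so a hit here
# may belong to a neighbor. (4123456789012349 is a Luhn-valid dummy.)
dump = b"...tenant-A heap...4123456789012349...tenant-B heap..."
print(scan_memory(dump))  # -> ['4123456789012349']
```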
There are ways, the retail security exec added, to limit that risk. “If introspection is allowed, it would have to be limited to tools that are PCI compliant (recording only ‘4123********2345’ instead of the whole number, for example). Another option is that a compliant environment operator could run the tools and just notify you when you’re failing.”
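A masking helper along those lines is simple enough to sketch; this one assumes the input is already a bare 16-digit PAN string. PCI DSS itself caps display at the first six and last four digits, and the exec’s example of first four plus last four is stricter than that ceiling.

```python
def mask_pan(pan: str, keep_first: int = 4, keep_last: int = 4) -> str:
    """Mask a primary account number before it is written to any log."""
    if len(pan) <= keep_first + keep_last:
        return "*" * len(pan)  # too short to mask meaningfully: hide it all
    middle = "*" * (len(pan) - keep_first - keep_last)
    return pan[:keep_first] + middle + pan[-keep_last:]

print(mask_pan("4123456789012345"))  # -> 4123********2345
```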
She added that there are other ways this can be made to work in a shared cloud, assuming all players are willing to compromise a little and trust a lot. “What I can see happening is the hosting providers being able to offer DLP services via introspection, while forbidding their clients access to the APIs or from running them directly. It seems logical that a PCI DSS certified provider could be trusted to do that,” she said. “You could call it security as a service, if you really wanted to annoy people.”
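Reusing scan_memory and mask_pan from the sketches above, the division of labor she describes might look something like the following. Every name here is hypothetical; no actual provider offering is being described.

```python
# Sketch of "security as a service": the provider runs the introspection
# tooling, clients never touch the APIs, and findings arrive pre-masked.

def provider_dlp_pass(vm_id: str, memory_blob: bytes, notify) -> None:
    """Provider-side scan of one tenant VM; reports only masked findings."""
    for candidate in scan_memory(memory_blob):   # detection, sketched earlier
        notify(vm_id, mask_pan(candidate))       # client sees masked PAN only

def email_client(vm_id: str, masked_pan: str) -> None:
    print(f"[DLP alert] VM {vm_id}: possible card number {masked_pan} in memory")

provider_dlp_pass("vm-42", b"checkout buffer: 4123456789012349", email_client)
```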
The only problem: Compromise and trust are two things in impressively short supply with the PCI Council, QSAs and retail security people.
The rules about shared clouds skirt the core issue, which is that the very nature of a shared cloud raises the distinct possibility of undetected access by hostile cloud neighbors. CSPs are not known for rigorously vetting whom they sell cloud space to, nor for aggressively monitoring what those tenants do once inside.
To be fair, CSPs are generally fine at being reactive, meaning they’ll monitor and quickly shut down neighbors shown to be acting poorly. That’s after the fact, though. How would your cloud security protocols look if you made the assumption that at least one of your cloud neighbors was a professional cyberthief? How much data would you want them to be able to demand about your shared cloud? What if they are solely there for the purpose of breaking into your systems?
And what if it isn’t a traditional cyberthief looking to steal payment-card data or manipulate your payroll system or even an identity thief looking to pretend to be one of your customers? What if it’s a chain just like yours, but one that has decided to use the security of the cloud to do some cyber-snooping and competitive intelligence? The attack methods may be similar, but the profile of the attacker—the seemingly innocuous little old lady from Pasadena who moved in next door—may look comforting. In other words, even self-selected neighbors can be trouble.
The council’s cloud guidance does appear to be generally well thought-out. But it seems to focus too much on a theoretical nirvana instead of the reality where we all have to work.
February 14th, 2013 at 4:26 pm
Having an ill-behaved or malicious neighbor is actually far less of a risk than believed. MIT did an excellent research paper on this very topic and found that such an attack is HARDER to pull off in a public cloud environment than in a private one. One corollary – the FEWER neighbors you have, the LESS protected you are, not the reverse. If you have 50 tenants on the same host, finding a specific tenant’s information is going to be far more difficult than if there are 3 tenants on the same host.