Security Versus Scope: Choose One
Written by Walter Conway. A 403 Labs QSA and PCI columnist, Walt Conway has worked in payments and technology for more than 30 years, 10 of them with Visa.
Tokenization and end-to-end encryption are designed to secure information both in transit and at rest. In other words, the focus of each technology is security first. The fact that they can reduce PCI scope or make PCI compliance easier is a secondary benefit. That tokenization and end-to-end encryption vendors are starting to figure this point out is good news.
The smart ones—and the only vendors you should be talking to, by the way—will tell you that there is no silver bullet to make PCI go away, which is a victory for reality over marketing hype. But there is one thing you should know about reducing your PCI scope: It is pretty important. If retailers are going to get the full value from their investment in either or both of these technologies, they had better look carefully at their implementation, or they may find they will not get all that they paid for.
There have been some pretty good arguments made that tokenization and end-to-end encryption each can reduce a retailer’s PCI scope. But if these technologies are not implemented properly, you may find that you are more secure (a good thing) but haven’t reduced your PCI scope (a bad thing).
Whether or not you insist that your tokens be compliant with the Luhn algorithm (used to compute the check digit in the PAN) is not particularly relevant. Similarly, the issues of token collisions (duplicates) and even format-preserving encryption/tokenization (the encrypted data or tokens look like a PAN) are secondary. The main question retailers need to ask is whether they will have the ability to de-tokenize or decrypt the data and return it to plain text. If you have this ability, through whatever means, that data is still in scope for PCI, and you will have lost a lot of the value of your investment.
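For reference, the Luhn check is simple enough to sketch in a few lines of Python (the sample number below is a well-known Visa test PAN, not live cardholder data):

```python
# A minimal sketch of the Luhn check mentioned above: double every
# second digit from the right, subtract 9 from any result over 9,
# and require the total to be a multiple of 10.

def luhn_valid(pan: str) -> bool:
    digits = [int(d) for d in pan]
    for i in range(len(digits) - 2, -1, -2):
        d = digits[i] * 2
        digits[i] = d - 9 if d > 9 else d
    return sum(digits) % 10 == 0

print(luhn_valid("4111111111111111"))  # True: the standard Visa test PAN
print(luhn_valid("4111111111111112"))  # False: the check digit is wrong
```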
As almost everyone should know, the PCI Council has decreed that encrypted cardholder data is in your PCI scope. The sole exception is “if, and only if, it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it.” The whole idea of investing in either tokenization or end-to-end encryption is to get as many of your systems and databases out of PCI scope as possible and to reduce your cost of PCI compliance.
Therefore, this one exception is important. The clear intent of the Council is that if you, the merchant, have access to any mechanism that lets you go from a token back to the clear-text PAN (whether you call that mechanism decryption, de-tokenization or de-anything), that tokenized data is still in scope. You gained some extra security, but you just blew the opportunity to reduce your PCI scope.
February 11th, 2010 at 1:45 pm
I agree with all your arguments and this is one of the reasons I advocate third party gateways — provided that the gateway does not allow the merchant to de-tokenize the data.
I disagree with the PCI SSC when it comes to its decree that encrypted data is out of scope if “…it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it.” Encrypted data is still data and can always be decrypted. What will the ruling be if and when someone gets hacked and it is determined that “encrypted data” was being siphoned off for months or years, and then, all of a sudden, some end-to-end encryption key or base derivation key (in the case of DUKPT) was cracked? Now all this out-of-scope encrypted data is as open to the hacker as plain text.
February 11th, 2010 at 6:42 pm
“Encrypted data is still data and can always be decrypted” is a meaningless statement. It’s like saying “the sun will eventually explode.”
Please explain how this “can always be decrypted” step is performed without a key?
Propagating the myth that it’s somehow easy to brute-force strong, proven cryptography is false and misleading. Attacking AES, RSA, ECC, IBE, FPE, etc., when strong keys are used is completely infeasible.
After all, surely even tokenization services like Steve’s third-party gateway rely on encryption and key management every nanosecond for their own security: from the connections to the banks that protect the live data in transit, to the back-end PAN database, to the front-end token API on the device, to the application user interfaces over SSL used to manage things, etc.
So, why not encrypt from the moment of swipe and be done with it: take everything after that point out of scope, all the way to the processor? Using format-preserving encryption, as mentioned in the article, means the in-between applications still work. Easy.
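To make the format-preservation point concrete, here is a toy Python sketch of a format-preserving token vault (illustrative only: a random-token lookup table rather than true FPE such as NIST FF1, with hypothetical names and no real security):

```python
# Toy sketch of format-preserving tokenization (NOT real FPE): each
# PAN maps to a random token of the same length that still passes the
# Luhn check, so the in-between applications keep working unchanged.
# The dict stands in for the provider's hardened vault; token
# collisions and persistence are ignored, as nothing here is secure.
import secrets

_vault = {}  # token -> PAN, held only by the tokenization provider

def _luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for the digit string `body`."""
    digits = [int(d) for d in body + "0"]
    for i in range(len(digits) - 2, -1, -2):
        d = digits[i] * 2
        digits[i] = d - 9 if d > 9 else d
    return str((10 - sum(digits) % 10) % 10)

def tokenize(pan: str) -> str:
    """Issue a random, same-length, Luhn-valid token for `pan`."""
    body = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 1))
    token = body + _luhn_check_digit(body)
    _vault[token] = pan
    return token

print(tokenize("4111111111111111"))  # 16 digits, Luhn-valid, random
```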
Cheers,
Mark
February 11th, 2010 at 11:04 pm
I never meant to imply that a brute-force attack on an encryption key is easy (even though, technically, it is; it just takes a long time). All encrypted data can be cracked; it’s just a matter of at what cost and in how much time. What is considered strong today may be insignificant tomorrow. DES (single DES) was thought to be virtually uncrackable not so many years back. There are still DES devices in use, and it was not until recently that the rules were changed to phase them out. SSL v1 was strong at one time; now it’s virtually useless. MD5 hashing was state of the art at one time; now PCI outlaws it.
I do see one flaw in my original statement. I should not have used the word “cracked,” because that does imply a brute-force attack or an encryption vulnerability. While those are possible attack vectors, I probably should have used a more generic word like “compromised,” via whatever means. Many times it is easier to attack the key-management components than it is to mount a brute-force attack on the key itself. My point is that if the key is ever compromised, all the freely available encrypted data is now vulnerable.
Yes, Steve’s data center does use encryption, but our data center is fully in scope for PCI, and we have multiple layers of controls to keep the data inside it. PCI, by contrast, is saying a vendor can freely store, process, transmit or distribute encrypted CHD as long as the vendor does not have the key. Do you really think this is secure?
February 12th, 2010 at 4:14 pm
Even though the claim that any encryption can eventually be cracked is technically true, it’s also somewhat misleading. And absolutely nobody ever really thought that DES was uncrackable. The only people who ever argued that it was were people from the US government who wanted to control the export and use of encryption outside the US, and nobody really believed them.
Let’s get a rough idea of how secure 112- and 128-bit keys are by estimating the level of effort needed to crack one. Let’s base this on the EFF’s DES Cracker.
The DES Cracker can test roughly 92 billion keys per second on 1,536 special-purpose chips. Given a plaintext-ciphertext pair, it can test all possible DES keys in a bit over 9 days, and you’d expect it to find the key that decrypted the ciphertext in about half that time, or about 4 and a half days.
Let’s assume that we can make a special-purpose computer that can test keys one billion times faster than the DES Cracker, or roughly 92 quintillion keys per second. This isn’t even close to realistic with today’s technology, but maybe a combination of Moore’s law, faster clock speeds and adding lots of additional chips to the computer will let us do this one day.
Even with this huge increase in power, such a hypothetical machine would still take roughly one million years to recover a single Triple-DES key, or about 100 billion years to recover a single 128-bit AES key. So unless someone finds an incredibly severe weakness in the encryption algorithms we use today, assuming that they are essentially unbreakable is fairly reasonable. No other component of a security architecture even comes close to being this strong.
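The arithmetic behind those estimates is easy to check. A few lines of Python, using the rates quoted above, reproduce the figures:

```python
# Checking the back-of-the-envelope numbers above.
CRACKER_RATE = 92e9             # keys/sec: EFF DES Cracker, as cited
FAST_RATE = CRACKER_RATE * 1e9  # the hypothetical billion-times-faster machine

DAY = 60 * 60 * 24
YEAR = DAY * 365

print(2**56 / CRACKER_RATE / DAY)   # ~9.1 days: full DES key space
print(2**112 / FAST_RATE / YEAR)    # ~1.8e6 years: Triple-DES (112-bit) space
print(2**128 / FAST_RATE / YEAR)    # ~1.2e11 years: 128-bit AES space
# Expected time to actually hit the right key is about half of each figure.
```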
So for all intents and purposes, a hacker is never going to attack the cryptography. He’s always going to bypass it. In fact, that’s Shamir’s third law of security, which he discussed in his Turing Award lecture in 2002.
But if 112- or 128-bit encryption is essentially unbeatable, key management certainly isn’t. You need to authenticate a user or application that requests a key, for example, and that process will never be as strong as the encryption itself. Note that this also applies to tokenization: you need to authenticate a user or application that requests a detokenization operation. So if you worry about key management (as you should), you should also worry about the corresponding issues in the way tokenization is implemented.
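As a minimal sketch of that parallel (all names here are hypothetical), a detokenization call is gated by exactly the kind of authentication check an attacker will target:

```python
# Sketch of the symmetry described above: whether the secret sits
# behind a cipher key or a token vault, the same authentication step
# is what actually gates access to the PAN. Names are made up.

_vault = {"5454657887871234": "4111111111111111"}  # provider-side store
AUTHORIZED_CALLERS = {"settlement-batch"}          # hypothetical allow-list

def detokenize(token: str, caller: str) -> str:
    # This check, not the vault or the cipher, is the part an attacker
    # bypasses: compromise an authorized caller and the PAN comes back.
    if caller not in AUTHORIZED_CALLERS:
        raise PermissionError(f"{caller} is not allowed to detokenize")
    return _vault[token]
```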
So the security provided by encryption and by tokenization is essentially the same: both are limited by system issues that don’t really relate to the underlying encryption or tokenization technology. People seem either not to understand that or to cheerfully ignore it.