Going Out On A Limb With Out Of Scope
Some people reading the FAQ will be disappointed that the Council did not go far enough, or will say that it opened up more questions than it answered. I prefer to give the Council and its Technical Working Group credit for taking on the subject. The Council said at its September 2009 Community Meeting that its evaluation of new technologies was in the early stages and that nothing definitive would come out soon. Also keep in mind that every word and punctuation mark in the FAQ had to be blessed by each of the five payment brands. Speaking as someone who worked for one of the brands, I can tell you this is no easy task.
Where does the Council’s position on encryption leave IT execs who are wondering which data is in scope, which is out of scope, and which falls into what I referred to as temporarily out of scope? And what should they do to protect each type of data?
Let me say first that I am a fan of both point-to-point encryption and tokenization when they are properly implemented. Either technology (again, if properly implemented) can reduce your scope and minimize both the effort and the cost of PCI compliance. As always, the questions arise when we discuss the specific details. In-scope data (including encrypted data when you have the means to decrypt it) must be protected, per the DSS. Some out-of-scope data does not need to be protected. In this category I include properly truncated PANs that are explicitly out of scope.
What’s left is my so-called temporarily out-of-scope data, by which I mean data that, while out of scope today, may be in scope tomorrow. For example, your vendor may be hacked or fall victim to a phishing attack and encryption keys or lookup tables may be compromised. Or, a malicious insider may request that the vendor decrypt some data and send her the clear text PANs. Or, an insider with a legitimate need for the data loses the clear text data (or a storage device or laptop) or leaks it to other systems that you classified as out of scope.
Basically, if you can imagine a reasonable scenario whereby your out-of-scope data can morph into in-scope data, then it fits my definition of temporarily out of scope. If this possibility exists, for the sake of your company and your brand (and maybe your job), you need to consider protecting this temporarily out-of-scope data, per the DSS. That is, treat such data as if it is in scope.
This conclusion may not be exactly the same as the one Evan reached in his piece last week, and you may not agree with it. Either way, I’d like to know what you think. Leave a comment, or E-mail me: wconway@403labs.com.
November 19th, 2009 at 12:51 pm
I would argue that “true tokens”, meaning tokens not based on the PAN, are out of scope and that the PCI Council or the card brands would have a difficult time bringing them into scope. The reason: true tokens can be generated by any scheme: simple sequential numbers, pseudo-random numbers, timestamps, or any number of other factors. Prior to “tokenization” (and still to this day), a POS vendor could use the invoice number as a “token” with the gateway that I represent. If tokens are deemed in scope, shouldn’t invoice numbers be in scope as well?
On the other hand, I would argue that “false tokens”, meaning tokens based on the PAN, whether via a hash or encryption, are in scope because they have the potential to be reversed, either by being looked up in a hash table or by being decrypted.
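To illustrate the distinction in rough terms, here is a hypothetical Python sketch (my own illustrative names, not our gateway’s actual code):

import hashlib
import secrets

# Hypothetical vault: for a "true token", only this mapping table
# links the token to the PAN; the token itself carries no PAN data.
VAULT = {}

def true_token(pan: str) -> str:
    """Random value with no mathematical relationship to the PAN."""
    token = secrets.token_hex(8)
    VAULT[token] = pan
    return token

def false_token(pan: str) -> str:
    """Derived from the PAN itself (here, an unsalted hash), so it
    can be attacked offline without ever touching the vault."""
    return hashlib.sha256(pan.encode()).hexdigest()

The second function is the problem case: the PAN keyspace is small enough that an unsalted hash can be brute-forced or looked up, which is why a PAN-derived token remains attackable on its own.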
In either situation (true or false tokens) the tokenization system itself would always be in-scope.
November 19th, 2009 at 8:06 pm
Thanks for your comment, Steve.
While “true tokens”, as you call them, can’t be unscrambled, there is generally a lookup table or some other method to get back to the original PAN. That lookup table is the source of the vulnerability, and the vulnerability grows with your policies and procedures for who can get to the table and, thereby, to the clear text data.
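To make that concrete, here is a hypothetical sketch (illustrative code and names of my own, not any vendor’s product) of the detokenization path and the policy that guards it:

# Hypothetical vault and policy layer (illustrative values only).
VAULT = {"tok_4f3a9c21": "4111111111111111"}  # token -> clear text PAN
AUTHORIZED_DETOKENIZERS = {"settlement-batch", "chargeback-service"}

def detokenize(requestor: str, token: str) -> str:
    """Recover clear text. Every name added to the allow-list
    above widens the exposure of the lookup table."""
    if requestor not in AUTHORIZED_DETOKENIZERS:
        raise PermissionError(requestor + " may not recover clear text")
    return VAULT[token]

The table itself, and the list of who may call this function, are where the real exposure lives.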
If there is no table or similar way to get back to the clear text data, then I would agree the “tokenized” data are out of scope. But I wonder how much such “true tokenization” would reduce scope in practice. That is, it seems these “true tokens” would not be much use for exception item processing, velocity tracking, loyalty, etc., so you would still need to keep and protect a lot of PAN data.
We agree that the tokenization system would always be in scope, which is why tokenization is a great way to reduce scope, but it doesn’t make PCI go away.
November 20th, 2009 at 12:50 pm
Not all tokenization solutions are created equal, and I would agree that if a lookup table is used, any out-of-scope claim becomes questionable. My initial thought is that the lookup table you referenced would not be part of a “true token” solution.
I only know the inner workings of our tokenization solution, and there is no way to use the token to get back to the PAN. The token can be used by the merchant to process transactions, but the PAN is never returned to the requestor (and tokens can only be used by the merchant to which they were issued).
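In rough, hypothetical terms (this is illustrative Python, not our actual interface), the design looks something like this:

import secrets

class Gateway:
    """Hypothetical gateway: tokens can be used, never read back."""

    def __init__(self):
        # (merchant_id, token) -> PAN, internal to the gateway only.
        self._vault = {}

    def tokenize(self, merchant_id: str, pan: str) -> str:
        token = secrets.token_hex(8)
        self._vault[(merchant_id, token)] = pan
        return token

    def charge(self, merchant_id: str, token: str, amount_cents: int) -> bool:
        # The PAN is resolved and used entirely inside the gateway;
        # it never crosses back to the requestor. A token presented
        # by a different merchant simply fails to resolve.
        pan = self._vault.get((merchant_id, token))
        if pan is None:
            return False
        return self._authorize(pan, amount_cents)

    def _authorize(self, pan: str, amount_cents: int) -> bool:
        # Stand-in for the real call to the processor.
        return amount_cents > 0

    # Deliberately absent: any detokenize() or get_pan() method.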
Now, I do know of at least one solution where the token is used to retrieve the associated PAN, but I would consider that a weakness of that particular tokenization solution; it has nothing to do with the scoping question for applications that use tokens.
November 20th, 2009 at 1:15 pm
You make two important points on which we agree, Steve. First, when you say “Not all tokenization solutions are created equal”, and I place emphasis on the “proper implementation” of the tokenization (or any security) product, we are saying much the same thing. Whether the issue is the product or the implementation (or both…shudder…), you need to look at the details.
Secondly, you point out the situation where “the PAN is never returned to the requestor.” So long as this is enforced, and so long as the vault/entity remains PCI compliant (and maybe stays in business, too…) and you can prove that the practice meets the policy, I may well agree with you that the tokens could be viewed as out of scope.
November 20th, 2009 at 1:47 pm
You both make good points, but… I think the key takeaway is housed in Walt’s last comment: “I may well agree with you that the tokens could be viewed as out of scope.”
I stress “could be viewed as” and would argue that all of this (I was about to say “much of this”, but it’s really almost all) hangs on what the viewer (presumably the PCI Council, the card brands, some major issuing banks, key assessors or some combo of all of the above) decides to do. And for most of the players mentioned above, the only safe route is to be conservative and declare that anything in doubt is in scope. You can make the most compelling and legitimate case in the world, but if the viewer doesn’t feel like taking on the risk, it won’t go anywhere.
And from the retailer’s perspective, why should they take on that risk and treat data dubbed out-of-scope any differently? It all hangs on somebody putting faith in the idea that an out-of-scope declaration gives them carte blanche to treat data differently. I doubt many wise viewers (assessors, PCI, brands, etc. OR retail IT) would find it worthwhile to take that risk. As for the merchants, they’ve learned the hard way that safe harbor is a myth. How many chains have been declared compliant, only to have that finding reversed after a breach? And you expect these people to trust an out-of-scope declaration?