How Long Is A Point-In-Time Audit Good For?
Written by David Taylor, GuestView Columnist. David Taylor is the Founder of the PCI Knowledge Base, Research Director of the PCI Alliance and a former E-Commerce and Security analyst with Gartner.
Heartland Payment Systems had a large security breach that apparently started in spring 2008. Around the same time, Trustwave–the largest of the QSA assessors–affirmed the company as PCI compliant. All sorts of things can be read into this finding: There was a problem with the audit; there is a problem with the PCI standards; or there’s a big difference between being PCI compliant and preventing security breaches.
In the last week, I have spoken with quite a few retailers and service providers who would like to condemn the whole process. After all, they argue, wasn’t the original purpose of spending all this money on security to prevent breaches like Heartland (or TJX or a thousand others)?
Every security professional worthy of their myriad certifications knows that security and compliance are two different animals. Security is about minimizing risk. It’s never stated as an absolute. Compliance, on the other hand, is about minimizing liability. It’s almost always stated as an absolute.
I’m deliberately making this distinction simply to point out that, even though security professionals “get this,” many business executives have been “sold” on the idea of spending money for compliance as a way to stop security breaches. So if a company that is “compliant” has a security breach, then its security and compliance managers will get frantic phone calls from business executives who want to be reassured that their spending on compliance has not been in vain. There may even be some yelling.
All PCI QSAs worthy of their certifications will tell you that their assessment is a “point-in-time” audit. After all, with 200+ controls to review, how could it be anything else? But how long is a “point in time”? And is there any way to make that point in time last longer, so that a “state of compliance” can persist for months–or at least until the next “point-in-time” review?
At least two points are worth making here. First, a PCI assessment (whether by a QSA or a self-assessment) takes time, often several months. It is common for some controls to have been reviewed two or three months before the final Report on Compliance (ROC) is submitted, and another month or two may pass before that ROC is reviewed and accepted. So it is not unreasonable to believe that by the time a ROC is finally approved, some of the more “dynamic” controls (e.g., firewall rules, system patches and configuration, identity management) may have already changed to the extent that they are no longer in compliance. The implication is that, in some cases, a company may never be fully compliant.
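To make that drift concrete, here is a minimal sketch of the kind of check a compliance team could run between assessments. The baseline file name, its format and the monitored paths are all hypothetical; the idea is simply to compare current hashes of “dynamic” control artifacts (firewall rules, key configuration files) against the values captured when the control was last assessed.

```python
# Minimal sketch (hypothetical baseline file and paths): flag drift in
# "dynamic" controls by comparing current configuration hashes against a
# baseline captured at assessment time.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

BASELINE_FILE = Path("audit_baseline.json")  # e.g., {"/etc/firewall/rules.v4": "<sha256>", ...}

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_drift() -> list[dict]:
    baseline = json.loads(BASELINE_FILE.read_text())
    findings = []
    for path, expected in baseline.items():
        try:
            current = sha256_of(path)
        except FileNotFoundError:
            current = None
        if current != expected:
            findings.append({
                "control_file": path,
                "checked_at": datetime.now(timezone.utc).isoformat(),
                "status": "DRIFTED" if current else "MISSING",
            })
    return findings

if __name__ == "__main__":
    for finding in check_drift():
        print(finding)
```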
Second, we have often commented on the “bipolar” nature of the compliance process. Project managers run around like crazy ramping up to a PCI audit: gathering data, setting up interviews with assessors, compiling reports. And then they (and their security budgets) are exhausted once they get the “green ROC,” or they move on to the next project. As a result, we maintain that any controls that can be automated should be automated, and that control effectiveness data should be collected on a continuous basis. That is the best currently available option for prolonging a state of compliance, because the company can then provide evidence that control effectiveness did not “erode” after the audit was completed.
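As one illustration of what automating control evidence might look like, the following sketch runs a few scheduled checks and appends timestamped pass/fail records to a log. The control IDs, commands and log file are purely illustrative, not prescribed by PCI; a real implementation would map checks to the organization’s own controls and tooling.

```python
# Minimal sketch (illustrative control checks and log path): collect
# timestamped control-effectiveness evidence on a schedule, so effectiveness
# between audits can be demonstrated later.
import csv
import subprocess
from datetime import datetime, timezone

EVIDENCE_LOG = "control_evidence.csv"

# Each entry: (control id, command whose exit status indicates the check result).
CONTROL_CHECKS = {
    "REQ-2.3_ssh_service": "systemctl is-active sshd",        # illustrative
    "REQ-10.5_log_daemon": "systemctl is-active rsyslog",     # illustrative
}

def run_checks() -> None:
    now = datetime.now(timezone.utc).isoformat()
    with open(EVIDENCE_LOG, "a", newline="") as fh:
        writer = csv.writer(fh)
        for control_id, command in CONTROL_CHECKS.items():
            result = subprocess.run(command.split(), capture_output=True)
            status = "PASS" if result.returncode == 0 else "FAIL"
            writer.writerow([now, control_id, status])

if __name__ == "__main__":
    run_checks()  # e.g., invoked hourly by cron or another scheduler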
Over its various versions, the language of the PCI security standards has evolved to use fewer “absolutes” (never, always, all, must) in the review of controls. Some remain, of course, and they will continue to be a reason why a company can easily fall out of compliance after a point-in-time audit. Another issue, less frequently discussed, is sampling. There are more than 20 mentions of sampling in the PCI assessment procedures. The goal, in each case, is to ensure that the sample is “representative” of the overall population of systems. It’s a safe bet, however, that few QSAs have sufficient information to understand the overall population of the systems they are sampling.
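For illustration, here is one way a sample could be made to track the population, assuming an asset inventory that records each system’s type (the inventory fields are assumed, not part of any PCI procedure): stratify the inventory and draw from each stratum in proportion to its share, rather than sampling whatever hosts happen to be convenient.

```python
# Minimal sketch (assumed inventory fields): draw a stratified sample so each
# system type is represented in proportion to its share of the population.
import math
import random
from collections import defaultdict

def stratified_sample(inventory: list[dict], sample_size: int) -> list[dict]:
    by_type = defaultdict(list)
    for system in inventory:
        by_type[system["type"]].append(system)

    sample = []
    for system_type, systems in by_type.items():
        share = len(systems) / len(inventory)
        take = max(1, math.ceil(share * sample_size))  # at least one per stratum
        sample.extend(random.sample(systems, min(take, len(systems))))
    return sample

inventory = [
    {"host": "pos-001", "type": "point-of-sale"},
    {"host": "pos-002", "type": "point-of-sale"},
    {"host": "db-01", "type": "database"},
    {"host": "web-01", "type": "web"},
]
print(stratified_sample(inventory, sample_size=2))
```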
QSAs must be guided by their customers. But many PCI project managers, and the technicians interviewed for the audit, may not themselves understand enough about the population of systems to be sampled. My point here is simple: after a breach, the forensic process is likely to review the entire population of systems, not just a sample. It is easy to understand, then, how a sampling process could miss vulnerabilities and even malware, especially if the “bad people” were smart enough to hide the malware in some out-of-the-way place, such as an unallocated section of a disk drive or in temp files, as they did at Heartland. I would argue, therefore, that a completely correct audit can miss certain types of “passive” malware, such as sniffers, if the criminals are especially clever.
I’m not saying that PCI is flawed because it’s based on sampling, requires a 100 percent score to pass or takes months to complete. Nor am I saying it is flawed because the company being audited could actually have fallen out of compliance by the time the audit is complete. I am saying that security is not perfect, and neither is the audit process. Please note that I’ve avoided saying “PCI” in most cases because my comments apply to other types of audits as well.
Rather, the whole point here is to lobby for more awareness of the “variability” of the process. If there is any one change I would like to see, it is more incorporation of measures of “control effectiveness” into the standard, the assessment process, the automation of controls reporting, and even how people think about compliance. If the definition and metrics associated with compliance were adjusted to more accurately mirror the measurement of risk and security (both highly variable), we could more accurately set both the expectations and the budgets for IT security and compliance.
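As a rough illustration of what a control-effectiveness metric could look like, this sketch reads the kind of timestamped evidence log described earlier (the log format is hypothetical) and reports the fraction of checks each control passed over the period, rather than a single pass/fail verdict.

```python
# Minimal sketch (hypothetical evidence-log format): summarize compliance as a
# control-effectiveness score per control over a reporting period.
import csv
from collections import defaultdict

def effectiveness_by_control(evidence_log: str) -> dict[str, float]:
    passes = defaultdict(int)
    totals = defaultdict(int)
    with open(evidence_log, newline="") as fh:
        for _checked_at, control_id, status in csv.reader(fh):
            totals[control_id] += 1
            if status == "PASS":
                passes[control_id] += 1
    return {cid: passes[cid] / totals[cid] for cid in totals}

if __name__ == "__main__":
    for control_id, score in effectiveness_by_control("control_evidence.csv").items():
        print(f"{control_id}: {score:.0%} of checks passed this period")
```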
Our team at the PCI Knowledge Base has established our research agenda for 2009, working with the National Retail Federation and, potentially, a couple of other industry groups. We will be focusing more on PA-DSS and the PCI PED TDES requirements and helping organizations identify solution providers who can deliver the PCI Best Practices that our 2008 research program has identified. We will be publishing a report shortly on the 2008 PCI Best Practices research. If you want more information, just send an E-mail to David.Taylor@KnowPCI.com.