PCI 2.0: Major Step Forward, If You Value Vagueness

September 30th, 2010

Unfortunately, the Council opted not to get specific about where retailers should look for guidance in determining such matters. And again, this choice invites QSA quarrels. PCI’s instruction is that “risk rankings should be based on industry best practices. For example, criteria for ranking ‘High’ risk vulnerabilities may include a CVSS base score of 4.0 or above, and/or a vendor-supplied patch classified by the vendor as critical and/or a vulnerability affecting a critical system component.”
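Loose as they are, the quoted example criteria do reduce to a simple any-of test. The sketch below is one possible reading only; the function name, inputs and the “Low” fallback are assumptions, not anything the standard specifies:

```python
def rank_vulnerability(cvss_base_score, vendor_says_critical, on_critical_component):
    """Illustrative reading of PCI 2.0's example criteria for a 'High'
    risk ranking. The 4.0 threshold comes from the quoted text; the
    names and the 'Low' fallback are assumptions."""
    if (cvss_base_score >= 4.0
            or vendor_says_critical
            or on_critical_component):
        return "High"
    return "Low"
```

The trouble the article describes starts exactly where this sketch ends: “may include” means another assessor could legitimately pick different criteria, or a different threshold, and both readings would satisfy the text.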

Coding quality is another area where the changes drift toward ambiguity. The current wording encourages developers to build applications based on secure coding guidelines. Hard to argue with that. And the new version makes it explicit that this rule “applies to all custom-developed application types in scope, rather than only Web applications.” No problem there.

The current version wants developers to rely on the not-for-profit Open Web Application Security Project (OWASP), while the new version gives retailers many more choices. “As industry best practices for vulnerability management are updated (for example, the OWASP Guide, SANS CWE Top 25, CERT Secure Coding, etc.), the current best practices must be used for these requirements.”

By listing three sources, and by opening the door to an unlimited number of others with that “etc.,” the Council makes conflicting interpretations inevitable. It might not even take a genuine disagreement over interpretation: what if one QSA simply prefers a different group’s guidelines than another QSA? By mandating use (“current best practices must be used”) and then offering so many options, the promised clarity may morph into chaos.

Yet another example: PCI 2.0 speaks of password management requirements for non-consumer users, but it never defines non-consumer users.

In 8.5.9, the new rule says: “For a sample of system components, obtain and inspect system configuration settings to verify that user password parameters are set to require users to change passwords at least every 90 days. For service providers only, review internal processes and customer/user documentation to verify that non-consumer user passwords are required to change periodically and that non-consumer users are given guidance as to when, and under what circumstances, passwords must change.”
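The first half of that testing procedure, at least, is mechanical. A minimal sketch of the check against a sample of system components; the parameter name and the dict shape are assumptions for illustration, not anything 8.5.9 prescribes:

```python
def password_age_compliant(max_password_age_days):
    """True when the configured maximum password age forces a change at
    least every 90 days, per the quoted testing procedure. A setting of
    0 or None (password never expires) fails."""
    return max_password_age_days is not None and 0 < max_password_age_days <= 90

def sample_compliant(components):
    """Apply the check to a sample of system components, each modeled
    here as a dict carrying a hypothetical 'max_password_age_days'
    configuration setting."""
    return all(password_age_compliant(c.get("max_password_age_days"))
               for c in components)
```

The second half of the procedure, covering “non-consumer users,” admits no such sketch, which is precisely the problem the article goes on to describe.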

“They are creating a special qualification for this, but it’s not clear what a non-consumer is,” Ipswitch’s Lampe said. “It could be a cashier or a member of the IT department. No way to tell.”

Blake Huebner, the director of information security at BHI SecureConnect, took exception to changes in the wireless scanning area. The new rule (11.1) permits additional methods for the quarterly testing for wireless access points, adding “flexibility that methods used may include wireless network scans, physical site inspections, network access control (NAC) or wireless IDS/IPS.”

Huebner thinks allowing physical site inspections is an unwelcome change. “They regressed on wireless scanning,” he said, adding that physical inspections rarely add much and may distract IT staff from more effective methods for detecting rogue wireless issues.


One Comment

  1. Dave Gianna, MS, MBA, CISSP, PCI-QSA, PA-QSA Says:

    The move to a risk-based approach to PCI-DSS rather than a compliance-based approach would enable the transformation of PCI-DSS from a compliance standard to a security standard. On the other hand, the PCI-SSC avoids conflict with other industry security standards, guidance and recommended best practices by NOT trying to be a security standard. That much is true up to now.

    For example, in the case of annual key rotation requirements in section 3.6, there is little ambiguity in v1.2.1 where an annual key rotation is required. If an entity feels that quarterly or monthly encryption key rotations are appropriate, then they have met and exceeded that standard. If the cryptographic cycle is longer than one year, then that organization has not met the standard.
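The v1.2.1 rule the commenter describes is indeed unambiguous; a minimal sketch of that check, where the 365-day cutoff is an assumed reading of “annual” and the names are illustrative:

```python
from datetime import date, timedelta

def meets_annual_rotation(last_rotated, as_of):
    """v1.2.1 reading described above: keys must be rotated at least
    once a year. Quarterly or monthly rotation exceeds the standard;
    a cryptographic cycle longer than one year fails it."""
    return as_of - last_rotated <= timedelta(days=365)
```

Under v2.0’s risk-based language, as the commenter argues next, no single function like this can be written, because the passing interval becomes a judgment call.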

    In most cases, the real risk of compromise to otherwise protected data lies not in the age of the encryption key, but in the balance of section 3.6 requirements that are NOT addressed by key rotation – specifically the key generation, storage, distribution and revocation requirements. Generate weak encryption keys, implement weak cipher strength, get sloppy with storage and distribution, and allow anyone to change keys arbitrarily at whim – and the frequency at which you rotate keys matters little, even if they are changed daily.

    But is the QSA empowered to make risk-based decisions regarding the suitability of a client’s key rotation interval? Version 2.0 suggests this – conceivably a QSA may reject the practice of annual key rotation on the grounds that risk factors suggest a more frequent key rotation schedule, perhaps quarterly. If the organization does not meet the schedule mandated by the QSA, then that organization may fail the audit until such time that it meets the “QSA requirement.” The target organization may in turn argue that it has met the standard of PCI-DSS by implementing (and demonstrating) annual key rotation. They will cite “contempt of QSA” as the reason for their failure to comply with PCI-DSS. Who will referee this dispute? What recourse does either the target organization or the QSA have available to them, and what actions might they take to address the conflict?

    Similarly, a QSA may determine that rotation less frequent than once a year is appropriate – say, a cryptographic cycle of five years. How has this met the standard? By the arbitrary judgment of the same QSA who insisted on quarterly key rotations for that other organization? If the QSA can demonstrate the risk-based approach used to determine the cryptographic cycle, would this be acceptable? How can this be demonstrated with a high degree of confidence? (That is, how do we know that the QSA did not fudge some “voodoo” numbers to support their “risk-based” analysis?) How will this be communicated to the Acquirers and/or Card Brands?

    The only vehicle currently available for communicating deviations from the standard is the Compensating Control worksheet. In its current form, one can well imagine what such a worksheet would look like. But one would be hard-pressed to imagine that such a Compensating Control would convey a strong and convincing argument for a lengthy cryptographic cycle that falls well short of the rotation frequency established in the PCI-DSS.



