Shostack + Friends Blog Archive

A few Heartland links

Well, Mordaxus got the story, but I’ll add some links I found interesting or relevant.

StoreFront BackTalk has From The Heartland Breach To Second Guessing Service Providers [link to http://www.storefrontbacktalk.com/securityfraud/from-the-heartland-breach-to-second-guessing-service-providers/ no longer works]. Dave G at Matasano added “Heartland’s PCI certification.” [link to http://www.matasano.com/log/1395/heartland-first-thoughts/ no longer works] The Emergent Chaos time travel team already covered that angle in “Massachusetts Analyzes its Breach Reports:”

What’s exciting about this is that we’re seeing the PCI standard being tested against empirical data about its effectiveness. Admittedly, the report jumps to conclusions from a single data point, but this is new for security. The idea that we can take a set of “best practices” and subject them to a real test is new.

Rich Mogull [link to http://securosis.com/2009/01/20/heartland-payment-systems-attempts-to-hide-largest-data-breach-in-history-behind-inauguration/ no longer works] points out that:

This was also another case that was discovered by initially detecting fraud in the system that was traced back to the origin, rather than through their own internal security controls.

IDS users, vendors, or advocates: care to comment on why that’s happening?

8 comments on "A few Heartland links"

  • Because fraud is easier to detect than “malware that sniffed decrypted transactions on its processing platform”? Last I checked, it’s pretty darn hard to write a Snort rule to detect a sniffer. Nobody seems to be talking yet about how the bad guys got in and planted that malware, or whether it was an internal or external job. Once I know how they got in, what security technology they had deployed, how it was configured, and whether they were paying attention, I’ll tell you why.
    If you read the Verizon 2008 Data Breach Investigations Report, they didn’t have sufficient knowledge of what they were protecting to protect it well. I bet that’s a big factor in this one (i.e., the 90% rule on page 24 of the report).
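    For illustration, a minimal sketch of the point (my own, assuming Python with scapy installed and capture privileges; the drop-file name is hypothetical): a passive sniffer only reads packets and writes what it sees to a local file, so it never puts a byte on the wire for a signature-based network IDS to match.

      # Assumption: scapy is available and the process can open the interface.
      from scapy.all import sniff, Raw, TCP

      CAPTURE_FILE = "captured.log"  # hypothetical local drop file

      def log_payload(pkt):
          # Record any TCP payload seen on the wire; nothing is ever transmitted.
          if pkt.haslayer(TCP) and pkt.haslayer(Raw):
              with open(CAPTURE_FILE, "ab") as f:
                  f.write(bytes(pkt[Raw].load) + b"\n")

      # store=False keeps memory flat; the sniffer is purely passive.
      sniff(filter="tcp", prn=log_payload, store=False)

    Catching something like that falls to host-side controls (file integrity, process monitoring) or, as apparently happened here, to downstream fraud analytics.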

  • Cory Scott says:

    I find it intriguing that you would focus exclusively on IDS as the defensive control against this breach. Why not anti-virus, network segmentation (section 1.3), change control, file integrity monitoring, vulnerability scanning, patch management, log review, or many of the other controls that may have prevented, detected, or limited the impact of this breach?
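    To make one of those concrete, here is a minimal sketch of the file-integrity-monitoring idea (my own illustration; the directory and baseline paths are hypothetical): hash the files in a sensitive directory and compare them against a stored baseline, so a newly dropped binary shows up as an unexpected change.

      import hashlib
      import json
      import os

      BASELINE = "baseline.json"       # hypothetical stored baseline
      WATCH_DIR = "/opt/payments/bin"  # hypothetical monitored directory

      def snapshot(root):
          # Map each file path under root to the SHA-256 of its contents.
          hashes = {}
          for dirpath, _, files in os.walk(root):
              for name in files:
                  path = os.path.join(dirpath, name)
                  with open(path, "rb") as f:
                      hashes[path] = hashlib.sha256(f.read()).hexdigest()
          return hashes

      current = snapshot(WATCH_DIR)
      if os.path.exists(BASELINE):
          with open(BASELINE) as f:
              known = json.load(f)
          # New or modified files are the interesting events.
          for path, digest in current.items():
              if known.get(path) != digest:
                  print("changed or new:", path)
      with open(BASELINE, "w") as f:
          json.dump(current, f)

    Run periodically, a check like that flags the change; whether anyone reviews the alert is, of course, a separate question.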

  • Adam says:

    Cory,
    I was simply following Rich Mogull’s line of questioning, which I find interesting because of the ‘compare and contrast’ nature.

  • From what I’ve heard, controls were in place, but the malware was custom enough not to trip any rules. That would explain how it slipped by the IDS as well as anti-virus. There is still a debate about the attack vector at Hannaford, so we probably should not expect a quick answer. Furthermore, there are many good reasons why the attack vector is unlikely to be discussed in detail in public forums by those who have the answers.

  • Adam says:

    Davi,
    There are no good reasons not to discuss what happened.
    To put it another way, we can both assert all we like. If we offer up reasons and data, we bolster our discussion from ‘is! is not!’ to a debate. Absent such data, we end up exactly where we are in both this discussion and information security.
    Please offer up reasons.

  • Oh, I just noticed your response. You make my comment seem so insidious. I can think of a few reasons off the top of my head why discussions tend to be starved for details at first:
    1) Danger of early and incorrect assumptions. Those close to the case might want to do some out-of-box discussion and review with qualified investigators, but broad exposure to a general audience can create confusion and rumors that lead to false accusations, fear-mongering, or worse.
    2) Liability and confidentiality concerns (related to #1). Those closest to the breach are under binding agreements not to expose details or discuss them until approved through oversight.
    3) Corruption of the investigation. There is sometimes a risk that exposing details too early will lead to the destruction of important audit-trail data or the disappearance of a suspect. This is especially relevant with an opaque attack vector.

  • Adam says:

    Yes, I point out that your comment reflects an insidious assumption of secrecy in information security. I believe that such a reflexive approach must be overcome if we as a field are to make progress.

Comments are closed.