Code & Data Security
Last Thursday, JPMorgan issued a warning to 465,000 holders of prepaid cash cards that their personal information may have been accessed by hackers who attacked the JPMorgan network in July. JPMorgan detected the breach only two months later, in mid-September.
At first glance, there was nothing exceptionally interesting about this piece of news. We have heard of such data leaks constantly over the last few years, and behind each of them lies a fundamental failure to protect user data by ignoring basic security best practices. However, that was not the case here. According to the reports JPMorgan provided, they actually did everything right: all sensitive user content was encrypted in their database, and all standard protection measures were in place. So what went wrong?
The problem originated with one developer who, through a common mistake, allowed this breach to take place. One of the components of the application wrote some of the information it used to a log file, which was later breached. Writing to log files is, of course, common development practice, as logs allow developers to identify problems in their software. The problem begins when developers are not sufficiently aware of what should and shouldn't go into log files. The developer clearly meant only to record information needed for future debugging and troubleshooting, but in doing so caused a serious data breach, incurring subsequent costs for the employer.
This emphasizes what I’ve been preaching over and over since my early days as an application security consultant: application security and data security go hand in hand. You cannot implement data security without application security, as your application handles your most sensitive data on a regular basis. The JPMorgan example is very common: from a pure policy perspective, all data security practices were followed. Security controls verifying that the defined data repositories were encrypted were in place, as was a proper audit trail. However, the ad-hoc log files, which are internal to the application, were overlooked and never checked.
It is cases like this that emphasize the need to correlate data security and application security. Since JPMorgan properly secured their data, it is safe to assume they also spent a substantial amount of resources securing the application. However, most application security solutions today focus only on the code, rather than looking at both code and data, and are therefore blind to issues such as this one. A static code analyzer, unfortunately, would be unable to identify such an issue unless manually configured to test for it, with the names of the fields and variables containing sensitive data marked by hand. Expecting developers to fine-tune their static code analyzers this way is just as unrealistic: had they thought about it, they would not have caused the breach to begin with.
So, what can one take from this incident? Always remember the obvious reality: data security and application security are not standalone issues. Your applications are handling your most sensitive data, and your application security must therefore focus on how the application interacts with and affects that data. Relying on application security solutions that monitor only the code at rest will simply not cut it.
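One practical defense-in-depth measure (a sketch of my own, not something the source describes JPMorgan or any vendor using) is to redact sensitive values at the logging layer itself, so that even a careless debug statement cannot write them to disk. The pattern below, a Python `logging.Filter` that masks anything resembling a 13-to-16-digit card number, is illustrative only and is not a complete PAN-detection solution.

```python
import logging
import re

# Crude illustrative pattern: any bare run of 13-16 digits.
# A real deployment would need Luhn checks, separators, etc.
CARD_RE = re.compile(r"\b\d{13,16}\b")

class RedactCardNumbers(logging.Filter):
    """Mask card-number-like strings before any handler writes them."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Format the message once, mask it, and drop the original args,
        # so the masked text is what every handler emits.
        record.msg = CARD_RE.sub("****REDACTED****", record.getMessage())
        record.args = None
        return True

log = logging.getLogger("payments")
log.addFilter(RedactCardNumbers())
```

The design choice here is to enforce the rule centrally rather than rely on every developer remembering it per call site, which is precisely the failure mode the JPMorgan incident demonstrates.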
At Quotium, we believe that data and application security are inseparable. For this reason, one of the most important tests our solution, Seeker®, performs is tracking all sensitive data throughout the application and identifying any potential leakage of that data, whether through a log file, as in the JPMorgan case, through insecure third parties, or even by leaking it back to the user. That is why we were not surprised by the JPMorgan incident; we see these kinds of problems every day.