Encryption is a Red Herring – Segmentation is the key to Effective Security
A new month, a new data breach. This time the Office of Personnel Management (OPM), a major U.S. government agency, had to reveal that, on top of an initial breach of 4.2 million personal records, an additional 15 million records containing detailed background-investigation information may have been compromised. The implications for both the affected employees and the wider government infrastructure are significant.
In the inevitable high profile fall-out, the debate has raged about the lack of encryption at the OPM – despite the fact that encryption alone could not have prevented a breach on this scale because there was no effective and secure segmentation of users or data. As organisations make haste to avoid another such monumental breach of personal information, Paul German, VP EMEA, Certes Networks, warns against repeating the old mistakes and insists the OPM breach really reveals that it is time to think differently about security and embrace cryptographic user-to-application segmentation.
Every major data breach – and the OPM data breach was a doozy – prompts a huge array of theories regarding what could and should have been done to prevent it. When millions of personal records about government employees go missing, the investigations are both intense and high profile. From the extensively reported hearing by the House Committee on Oversight and Government Reform to calls for the OPM’s senior management to resign, this has been a breach that has played out in the public eye.
The general conclusion has been that the biggest issue was not the failure to block the initial breach but a lack of controls, time to detection and other safeguards that should have prevented intruders from obtaining any useful information. But the fact that the data stolen in this massive breach was not protected by data masking, redaction and encryption is something of a red herring. What the OPM breach really highlights is the continued problem of traditional network-based segmentation – namely, the ability to compromise a single user’s identity and thereby gain access to a mass of cross-organisational information.
Yet in an era of continued evolution of the threat landscape combined with an increasing diversity and complexity of the underlying IT architecture, just how can a Chief Information Security Officer (CISO) impose greater control and achieve that essential user specific level of application and data control?
Security best practice
There are some aspects of security best practice that are now a given. A defence-in-depth approach that combines multiple layers of prevention and detection technologies with procedural controls and policies is essential; user identification and access control is a standard tool for central administration and control; and intrusion detection tools are increasingly key to identifying breaches before intruders have had time to exfiltrate vast swathes of data – although this latter issue is certainly one with which organisations continue to wrestle.
Other areas of security best practice remain opaque. And one of the biggest issues that continues to challenge the CISO is the need to segment sensitive and non-sensitive applications, or to segment networks into manageable areas that not only restrict access but also ensure that, should unauthorised access occur, critical applications and data are not compromised.
One fact, however, is clear: simply throwing encryption into the mix is not the answer. As OPM spokespeople have insisted, even if the information had been encrypted, that might not have been enough to stop attackers from obtaining usable data from this intrusion. According to the OPM, once an intruder holds the credentials of a legitimate user on the network, data can be accessed even if it is encrypted – just as legitimate users must be able to access it – which is exactly what occurred in this case.
If, however, the OPM had had effective segmentation in place, the breach could never have reached this massive scale, because the intruder could only have accessed the data and applications that the compromised user was permitted to reach. Lateral movement from the compromised application into more sensitive applications would have been prevented, effectively containing the breach and limiting its impact through segregation and compartmentalisation.
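As a minimal sketch – not a description of any specific product, and with hypothetical user and application names – the containment principle amounts to making every access decision per (user, application) pair, denying by default:

```python
# Hypothetical policy: which applications each user identity may reach.
# A stolen credential exposes only the applications already permitted
# to that identity - there is no route to anything else.
PERMITTED = {
    "contractor-01": {"timesheets"},
    "hr-admin-07": {"timesheets", "personnel-records"},
}

def can_access(user: str, application: str) -> bool:
    """Deny by default; allow only explicitly permitted pairs."""
    return application in PERMITTED.get(user, set())

# An intruder holding contractor-01's credentials is confined to the
# timesheets application and cannot move laterally to personnel records.
assert can_access("contractor-01", "timesheets")
assert not can_access("contractor-01", "personnel-records")
```

The design point is that the policy names users and applications, never network locations, so containment holds regardless of where the intruder enters the network.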
Software defined security
So how can that be achieved? The key is to leverage the power of encryption in a highly focused and targeted way to create a cryptographic flow between each user and each application. Building on widely deployed identity and access control technology, a cryptographic relationship creates a clean and unbreakable link between each user and the data and applications that user is permitted to reach. With this approach, an organisation can ensure that, in the event of a breach, the intruder cannot reach beyond those defined limits and privileges to access other restricted information.
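One way to sketch such a cryptographic relationship – purely illustrative, and not the article’s product; the master-key scheme and names are assumptions – is to derive a distinct key for every (user, application) pair with HMAC-SHA256, so that compromising one relationship’s key material reveals nothing about any other:

```python
import hashlib
import hmac

# Illustrative only: in practice the master secret would live in an
# HSM or key-management service, never in source code.
MASTER_KEY = b"demo-master-secret"

def pair_key(user: str, application: str) -> bytes:
    """Derive a distinct key for each user-application relationship."""
    context = f"{user}|{application}".encode()
    return hmac.new(MASTER_KEY, context, hashlib.sha256).digest()

# Each relationship gets its own key material, so data encrypted for
# one application cannot be decrypted with another relationship's key.
k_timesheets = pair_key("contractor-01", "timesheets")
k_records = pair_key("contractor-01", "personnel-records")
assert k_timesheets != k_records
```

The effect is that encryption is no longer a single perimeter property but a property of each permitted relationship, which is what gives the segmentation its cryptographic enforcement.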
One of the most compelling aspects of this model is that it removes the infrastructure specific constraints and embraces a new, software defined security approach. Applications and data are located across a hugely diverse infrastructure – from LANs to WANs, private to public clouds, mobile networks, the Internet and other environments. Segmentation techniques utilised in each part of this infrastructure are equally diverse and fragmented, with VLANs, IPsec, TLS, SSL, ACLs and a range of other tools all playing a role in segmenting traffic. This ‘segmentation fragmentation’ and the difficulty with configuring and managing it from end-to-end is the primary reason that effective segmentation is so rarely deployed in practice.
But with a specific ‘user to application’ cryptographic relationship, the infrastructure becomes irrelevant. The only questions are: which applications and data should each user be permitted to access, and how should they be permitted to access them? The answers should then guide a segmentation implementation oriented around users and applications, not the infrastructure.
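To illustrate how such policy can be expressed in business terms rather than infrastructure terms – again a hypothetical sketch, with invented roles and application names – access can be defined per role and application, with no reference to VLANs, subnets or ACLs, so the same rules apply wherever each application is hosted:

```python
# Business rules: which applications each role may access.
# Nothing here names a network segment, so the policy is unchanged
# whether an application sits on a LAN, a WAN or in a public cloud.
ROLE_RULES = {
    "investigator": {"case-files", "background-checks"},
    "payroll-clerk": {"payroll"},
}

USER_ROLES = {"alice": "investigator", "bob": "payroll-clerk"}

def permitted(user: str, application: str) -> bool:
    """Resolve a user's role, then apply the business rule for it."""
    role = USER_ROLES.get(user)
    return role is not None and application in ROLE_RULES.get(role, set())

assert permitted("alice", "case-files")
assert not permitted("bob", "case-files")
```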
Most importantly, this evolved approach to access control and application protection can now be driven by business rules and requirements, as opposed to being limited by what the infrastructure can deliver.
With this approach, the privilege escalation that occurred in the OPM breach simply cannot happen. Rather than relying on traditional network segmentation to control access, the cryptographic relationship between a user and permitted applications means that, if the user is compromised, the intruder gets access to that permitted information – but no further. The intruder cannot use a single compromised user identity to gain free access across the board and hop laterally from one application to another containing more sensitive data.
There has been a huge kneejerk reaction to the OPM breach, with demands that encryption be enforced across the US public sector to safeguard this critical data. But the risk is that organisations will make the process too complicated – and still fail to achieve the level of security required. There is a massive difference between encrypting data up to the point of entry and using encryption to manage the relationship between a user, the devices that user may employ, and the permitted applications wherever they reside. Encryption alone is not the answer. Instead, the solution lies in strong encryption married to identity and access management controls, aligned with applications and user access rights as determined by business rules.
Organisations need to start thinking about security in a different way – and it is creating that user-to-application-specific cryptographic relationship that will, finally, be the key to preventing these huge – and continuous – breaches in vital data security.