GRC Tuesdays: In God We Trust, All the Others We Monitor!
Sorry to disappoint you, but no, the NSA’s mission statement is not “In God We Trust, All the Others We Monitor” but something much less controversial, albeit, I find, more ambitious: “Defending our Nation. Securing the Future”.
Regardless, I felt this urban legend was a perfect introduction to this blog, where I’d like to share a few thoughts on how companies can address the “insider threat” issue.
Insider threat – be it with malicious intent or due to erroneous actions from a current or former employee, contractor, etc. – is regularly cited as the primary source of data breaches (cf. GRC Tuesdays: Efficient Cybersecurity Response Requires Profiling of Data Breaches).
Despite this finding, which is far from new and is regularly raised by expert analysts, many companies are still in the early maturity phase of mitigating this risk.
In this short blog, I’d like to suggest a few options that, combined, can help companies better protect the information held in their systems. The aim is to monitor not just the perimeter, but the data itself.
Of course, my first suggestion would be an access governance process that defines users’ accesses and authorizations with precise policies. But I’d like to go a step further here, as I think the access governance approach is already well understood by most organizations.
Lock the information
The first step that you might want to consider is to limit the attack surface by reducing the risk of leaking sensitive data. To do so, why not mask specific data that users do not require to perform their daily tasks? In short: work on a need-to-know basis. This protection could be associated with roles, or with attributes for more granular and precise rules, so that unmasking requires explicit access rights. For instance, does an IT helpdesk operator helping you troubleshoot your access to your insurance portal need to be able to access your detailed medical information?
Consider this to be the cruise control on your car. If set – and unless you decide to override it – you won’t get caught speeding.
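To make the masking idea concrete, here is a minimal sketch of attribute-based unmasking in Python. All names used here (the fields, the attributes, and the masking rule) are my own illustrative assumptions, not a reference to any specific product:

```python
# Sketch of attribute-based masking: a field's clear value is only shown
# to users whose attributes satisfy an explicit unmasking rule.
# All field names, attributes, and rules below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    attributes: set = field(default_factory=set)

# Unmasking rules: field name -> attribute required to see the clear value
UNMASK_RULES = {
    "diagnosis": "medical_staff",
    "iban": "payments_clerk",
}

def mask(value: str) -> str:
    """Replace all but the last two characters with asterisks."""
    return "*" * max(len(value) - 2, 0) + value[-2:]

def view_record(record: dict, user: User) -> dict:
    """Return a copy of the record, masking any field whose unmasking
    rule requires an attribute the user does not hold."""
    out = {}
    for key, value in record.items():
        required = UNMASK_RULES.get(key)
        if required is None or required in user.attributes:
            out[key] = value
        else:
            out[key] = mask(str(value))
    return out

patient = {"name": "Jane Doe", "diagnosis": "Type 2 diabetes"}

helpdesk = User("helpdesk_01", {"it_support"})
doctor = User("dr_smith", {"medical_staff"})

print(view_record(patient, helpdesk))  # diagnosis masked
print(view_record(patient, doctor))    # diagnosis visible
```

Here the helpdesk operator sees the patient’s name (needed for the support call) but only a masked diagnosis, while a user holding the medical attribute sees the clear value.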
Log users’ actions
In some cases, masking information is not possible, since users need to access it precisely to perform their work. Here, the idea is to keep the data accessible but log each access for further analysis. We’ll come back to this in the next section.
This will help ensure compliant data access, but also enable the fast and indisputable identification of irregular data access if needed.
Continuing my automotive analogy from earlier, this would be the speed camera. If you knew that a speed camera was active on the stretch of road you are driving on, and that it would flash your registration if you failed to comply with the speed limit, would you really drive past it at high speed?
This “logging” option might seem appealing, but it could also mean that cybersecurity departments rapidly become overwhelmed by logs to analyse, especially in large organizations or in companies that handle personal information as part of most of their processes. Indeed, this increases the risk of false positives being raised regularly and in large quantities, and could therefore require a lot of manual review.
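As a sketch, such access logging can be as simple as recording who read which sensitive field, and when. The field names and log shape below are assumptions for illustration only:

```python
# Minimal sketch of sensitive-data access logging (names are assumed):
# every read of a field marked sensitive is recorded for later analysis.

import datetime

SENSITIVE_FIELDS = {"diagnosis", "salary"}
access_log = []

def read_field(record_id: str, field_name: str, user: str, data: dict):
    """Return the field value; if the field is sensitive, first append
    an audit entry (who, what, when) to the access log."""
    if field_name in SENSITIVE_FIELDS:
        access_log.append({
            "user": user,
            "record": record_id,
            "field": field_name,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    return data[field_name]

record = {"name": "Jane Doe", "salary": "85000"}
read_field("emp-042", "name", "helpdesk_01", record)    # not logged
read_field("emp-042", "salary", "helpdesk_01", record)  # logged
print(len(access_log))  # 1
```

In a real system these entries would of course go to a tamper-evident store rather than an in-memory list, but the principle is the same: the data stays accessible, and the trail is kept for the analysis discussed next.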
Monitor the logs to identify anomalies and the extent of a breach
This brings us to our third and final point: monitoring.
Using all the logs from above, but also potentially logs from other solutions, Cyber experts could automatically run detection patterns to identify anomalies. With the right activity monitoring tool, finding the needle in the haystack actually becomes possible without having to burn the entire haystack and go over the ashes with a magnet.
Further, this helps investigators identify and stop the perpetrator(s) in a timely manner and, if the malicious actions have already been carried out, will also help them determine the scope of the data breach for notification to the relevant parties, including impacted customers and regulators.
This can of course include logs of actions performed with temporary super-user status (AKA FireFighter IDs), since this is one of the most sensitive types of access: it grants the ability to change critical information directly in a productive system.
Most companies of course track these privileged accesses, but some aren’t able to get precise information on what was actually performed – except from the report written by the super-user after the fact. These companies therefore have to rely on the perfect execution (no error made) and good faith (no malicious intent) of the super-user operator. Both of which are part of the definition of an insider threat.
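As an illustration of such a detection pattern, the sketch below flags users whose volume of logged accesses far exceeds the median of their peers. The threshold values and log format are my own assumptions; real activity monitoring tools apply far richer rules than this:

```python
# Illustrative detection pattern over access logs: flag users whose
# access count in a window far exceeds the median across all users.
# The factor/floor thresholds and the log shape are assumptions.

from collections import Counter
from statistics import median

def flag_anomalies(log_entries, factor=5, floor=10):
    """Return users whose access count exceeds
    max(floor, factor * median of all users' counts) --
    a crude 'needle in the haystack' filter."""
    counts = Counter(entry["user"] for entry in log_entries)
    if not counts:
        return []
    threshold = max(floor, factor * median(counts.values()))
    return [user for user, n in counts.items() if n > threshold]

# Synthetic log: most users read a handful of records; one reads hundreds.
log = ([{"user": "alice"}] * 4 + [{"user": "bob"}] * 6
       + [{"user": "mallory"}] * 300)
print(flag_anomalies(log))  # ['mallory']
```

The same idea scales up: instead of burning the haystack, you let the detection pattern surface the few users, FireFighter sessions, or records that deserve a human review.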
Looking for additional suggestions?
There are of course many organizations that publish frameworks, recommendations, etc. But, since I used the NSA’s fake motto as a honey trap to get you to read this blog, I’ll render unto Caesar the things that are Caesar’s and suggest you browse the NSA’s Cybersecurity Advisories & Technical Guidance site. One of my favourite assets they release is the Top Ten Cybersecurity Mitigation Strategies, since it’s very succinct and pragmatic.
What about you, how does your organization manage the insider threat topic? I look forward to reading your thoughts and comments either on this blog or on Twitter @TFrenehard