
Insider risk management: Balancing security and employee agility

November 3, 2021 by Kurt Ellzey

Insider threats can be difficult to nail down because detecting them requires security to treat all authorized users as potential risks. The catch is that effective detection takes time: time to build a baseline of normal behavior, and even more to spot when something unusual is going on, which creates a window for users to perform actions that can cause problems. At the same time, we don’t want users to feel that we are deliberately trying to reduce their productivity or that they’re working in a police state. There are some basic principles to follow when it comes to balancing security and employee agility.

What is risk?

Before we define an insider threat, we should establish a couple of core concepts. For example, what is risk in the context of information security? 

According to the National Institute of Standards and Technology (NIST), information security risk is defined as “the risk to organizational operations (including mission, function, image and reputation), organizational assets, individuals, other organizations and the nation due to the potential for unauthorized access, use, disclosure, disruption, modification or destruction of information and/or information systems.” 

In most situations, organizations implement policies and procedures to try to reduce the amount of risk they take on. This can include group policies, file server permissions and physical security, among many other possible controls. However, beyond a certain point, the organization starts running the math. Organizations calculate risk by figuring out the Annualized Loss Expectancy (ALE). ALE is found by multiplying the Single Loss Expectancy (SLE), how costly a single event would be, by the Annualized Rate of Occurrence (ARO), how often that event is likely to happen in a year. 

If a solution can be found to reduce risk for less than the ALE, the organization can consider it. If it costs more, they then have to make a judgment call as to whether or not it makes business sense to proceed since the solution may be worse than the problem itself. If they decide not to go with the solution, then they are effectively accepting this risk for the time being.
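The arithmetic above can be sketched in a few lines. This is a hypothetical example with made-up figures, not real loss data; the function names are illustrative only:

```python
# Hypothetical sketch of the ALE calculation described above.
# All dollar figures are invented example numbers.

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

def worth_implementing(ale: float, annual_control_cost: float) -> bool:
    """A control is worth considering when it costs less than the ALE."""
    return annual_control_cost < ale

# Example: one data-loss incident costs $50,000 and occurs ~0.5 times/year.
ale = annualized_loss_expectancy(sle=50_000, aro=0.5)   # 25,000.0
print(ale, worth_implementing(ale, annual_control_cost=10_000))
```

Here a $10,000-per-year control comes in under the $25,000 ALE, so it passes the first test; a $30,000 control would force the judgment call described above.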

What is an insider threat?

If you ask the average person what they consider an insider threat, you may get a dozen different definitions depending on their point of view. 

While malicious users definitely can be an insider threat, they certainly aren’t the only ones. Whistleblowers, for example, could be considered insider threats based on the definition, even though their motives may not be malicious. Many people picture insider threats as cunning, malicious actors out to attack customers and the organization no matter the cost, but the reality is much broader. 

A user who leaves a 30-character password on a sticky note on their desk is not necessarily malicious. An employee who takes sensitive data home after being dismissed from the company, however, is a malicious insider threat. An employee who deliberately alters information for personal gain could be considered an insider threat, as could one who distributes Personally Identifiable Information (PII) in an insecure manner.

At the top of the article, we were talking about insider risk. How is this different from an insider threat? Let’s look at it like this: a user has access to a file share with important data. The insider threat could be perceived as how they could get that data out of the organization and to another location. On the other hand, the insider risk could be considered what kind of damage that data could do if it were released outside of the organization.

Insider risk telltales

Consider a user trying to email an Excel spreadsheet to an outside source. The file is very large, so they compress it into a zipped file. Zip files are prohibited from being sent, so the message never goes through. The user then renames the .zip file to something like .ipz and emails it out. Since this extension isn’t blocked, it works. Some organizations may or may not consider this a risk, but it is certainly a possible hole in their protections.
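One way a gateway could catch this kind of renaming trick is to inspect file content instead of trusting the extension. This is a minimal sketch, assuming the well-known ZIP signature bytes; a real filter would inspect many more formats:

```python
# Sketch: detect a ZIP archive by its magic bytes, regardless of the
# file extension. ZIP headers begin with "PK" signature variants.

ZIP_SIGNATURES = (b"PK\x03\x04", b"PK\x05\x06", b"PK\x07\x08")

def looks_like_zip(data: bytes) -> bool:
    """Return True if the payload starts with a ZIP file signature."""
    return data[:4] in ZIP_SIGNATURES

# A renamed .ipz file is still flagged, because content is what matters.
payload = b"PK\x03\x04" + b"...rest of archive..."
print(looks_like_zip(payload))         # True
print(looks_like_zip(b"plain text"))   # False
```

Because the check never looks at the filename, renaming .zip to .ipz makes no difference to the result.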

On the other hand, we see a user’s machine attempt to create a connection to a file-sharing site outside our internal network. Users may have legitimate needs to share large files with other users without using a central file server, but is that the case this time?

Still, other users may be nice and hold a security door open for people when they have a tray of coffees. Is this coffee-carrying person supposed to be in this part of the building? Did the user think to ask?

Countermeasures and balancing acts

Information is most definitely our friend when it comes to identifying potential risks, such as sent email logs showing what sorts of files have been sent out. We can then have these logs analyzed by software like a Security Information and Event Management (SIEM) system. Servers of any sort can generate an enormous amount of logs, which means that narrowing down the noise from the actual actionable intelligence is critical. 
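Narrowing the noise down can be as simple as filtering for the attachment types we care about. The log format below is a hypothetical list of records, and the extension list is an assumption; a real SIEM would ingest far richer fields:

```python
# Sketch: reduce log noise by keeping only sent-mail entries whose
# attachment extension is on a watch list. Record fields are assumed.

RISKY_EXTENSIONS = {".zip", ".ipz", ".7z", ".exe"}

def risky_entries(log):
    """Yield sent-mail records whose attachment extension looks risky."""
    for entry in log:
        name = entry.get("attachment", "")
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in RISKY_EXTENSIONS:
            yield entry

sent_log = [
    {"user": "alice", "attachment": "budget.xlsx"},
    {"user": "bob",   "attachment": "archive.ipz"},
]
print(list(risky_entries(sent_log)))   # only bob's entry survives
```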

This can also be viewed as a form of independent policing for our teams. If someone with administrator-level permissions begins to use those rights in an unusual or not necessarily authorized way, it will start showing up in the logs. The same goes if someone is attempting to elevate a standard user account to a privileged account. There is also one very particular action that we want to be on the lookout for: logs being cleared.

In Windows, for example, the event and audit logs can be cleared manually should the need arise. Sometimes this is necessary because their sheer size is causing issues, but in some cases they are cleared to cover tracks. Thankfully, whenever this occurs, a particular event is generated that cannot easily be removed: Event ID 1102 (or 517 on older versions of Windows), ‘The audit log was cleared.’ This event records which log was cleared, by whom and when. While it may not be a red flag every time, it is something we will want to dig into whenever we see it, especially if a pattern begins to emerge.
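Hunting for these events in an exported Security log can be scripted. The sketch below assumes a CSV export with `EventID`, `TimeCreated` and `User` columns; adjust the names to match whatever your export tool actually produces:

```python
import csv
import io

# Sketch: flag log-clearing events (ID 1102, or 517 on pre-Vista
# systems) in a CSV export of the Windows Security log.
# Column names here are assumptions about the export format.

CLEAR_EVENT_IDS = {"1102", "517"}

def log_clear_events(csv_text: str):
    """Return the rows whose EventID indicates a cleared audit log."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row.get("EventID") in CLEAR_EVENT_IDS]

export = """EventID,TimeCreated,User
4624,2021-11-01T09:00:00,alice
1102,2021-11-01T09:05:00,bob
"""
for event in log_clear_events(export):
    print(f"Audit log cleared by {event['User']} at {event['TimeCreated']}")
```

Feeding the flagged rows back into the SIEM lets us watch for the pattern the article describes, rather than reacting to a single clearance.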

We can also think about the amount of time a security door is typically open. If we have a baseline available for how long it takes an average user to open and then close the door, we can tell quickly (again, if we have access to the information) if a door was kept open longer than normal at a particular time. This allows us to cross-reference that with any available camera footage to see if there was a pattern to where this coffee person was going.
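The baseline comparison above can be sketched as a simple statistical check. The mean-plus-three-standard-deviations threshold is an assumption for illustration; a real system would tune it against its own data:

```python
import statistics

# Sketch: flag door-open durations that are unusually long compared to
# a baseline. The 3-standard-deviation threshold is an assumed default.

def unusual_durations(baseline_seconds, observed_seconds, num_stdevs=3.0):
    """Return observed durations exceeding mean + num_stdevs * stdev."""
    mean = statistics.mean(baseline_seconds)
    stdev = statistics.stdev(baseline_seconds)
    threshold = mean + num_stdevs * stdev
    return [d for d in observed_seconds if d > threshold]

baseline = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9]   # typical open-then-close times
observed = [4.3, 12.7, 4.0]                  # 12.7s stands out
print(unusual_durations(baseline, observed))
```

Any duration the function returns is a candidate for cross-referencing against camera footage, as described above.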

Firewalls can block access to all kinds of ports, sites, and services outside of our direct control. This can help protect our networks from a wide variety of attacks and threats but may first be viewed as a severe limitation by users. If we can make sure they understand that they are free to request access to any site they need to get to perform their duties, this could be considered far less restrictive than it otherwise would be.

Timing is everything when it comes to reducing threats around dismissals. If a user will no longer be with the company and they have access to sensitive information, their access needs to be revoked as soon as is reasonable. Communication between the department manager, HR, IT and security needs to follow consistent policies, so the manager knows exactly how and when to notify the other departments that the user is no longer with the organization.

Not all of these measures will be visible to users, but we need to make sure they know about policies such as no piggybacking at security doors, without making it seem like we are singling out individuals.

Mitigating insider threats and insider risks

Insider threats and insider risks require a balanced view. If the organization comes down too hard, it may end up causing users to be disgruntled and create the very thing they are trying to prevent. On the other hand, if they do nothing, it can be open season for outside threats to compromise users and their systems. 

Because of this, we need to have policies to show users what they can and cannot do and systems to enforce it. We need to have tools at our disposal to detect when unusual activity is happening, whether by a user or automated system. 

Finally, we need to resolve the situation by talking to the user to help them understand what is happening, stopping whatever unauthorized system is running or adjusting our metrics to compensate for something new that was not around when our baselines were created. With these factors in mind, we can keep a balanced view of risk and employee agency.




Kurt Ellzey has worked in IT for the past 12 years, with a specialization in Information Security. During that time, he has covered a broad swath of IT tasks from system administration to application development and beyond. He has contributed to a book published in 2013 entitled "Security 3.0" which is currently available on Amazon and other retailers.