Wouldn’t it be nice to know if you have an employee who is planning to steal your data or compromise your company in some way? Wouldn’t it be nicer if you knew when the employee was planning an attack? Well, wait no more. Algorithms have been developed to stop insider threats before they happen.
The most common insider threat is the accidental threat caused by naïve or irresponsible employees. The naïve employee may simply not recognize potential threats. They may make dangerous online decisions without realizing they are dangerous. Then, there are irresponsible employees. These are people who may know how to be safe online but choose not to follow established security procedures. The good news is that both these types of employees can be identified through simple algorithms. Their browsing and other online behavior can be monitored by algorithms which will detect dangerous behavior, much in the way a browser will detect that you may be visiting a dangerous site. The algorithm can then be programmed to give these problematic users a warning message. Most of these algorithms don’t monitor employees so much as they detect bad behavior patterns. Only later are these anomalous patterns matched to the individual employees.
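This kind of rule-based screening can be sketched in a few lines. The following is a minimal illustration only: the event format, domain blocklist, and thresholds are my own assumptions, not any vendor's actual detection logic.

```python
# Hypothetical sketch of rule-based risky-behavior detection.
# Event fields, the blocklist, and the 10 MB USB threshold are
# illustrative assumptions, not a real product's rules.

RISKY_DOMAINS = {"free-downloads.example", "cracked-software.example"}

def flag_events(events):
    """Return (user, warning) pairs for events matching simple risk rules."""
    warnings = []
    for event in events:
        if event["type"] == "visit" and event["domain"] in RISKY_DOMAINS:
            warnings.append((event["user"], f"visited risky site {event['domain']}"))
        elif event["type"] == "download" and not event.get("scanned", False):
            warnings.append((event["user"], "downloaded an unscanned file"))
        elif event["type"] == "usb" and event.get("bytes_copied", 0) > 10_000_000:
            warnings.append((event["user"], "copied a large volume of data to USB"))
    return warnings

events = [
    {"type": "visit", "user": "u1", "domain": "free-downloads.example"},
    {"type": "download", "user": "u2", "scanned": True},
]
print(flag_events(events))  # only u1's visit is flagged
```

Note that, as the text says, nothing here identifies intent; it only flags patterns, which are matched to individuals later.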
The trouble with the above strategy is that it does not really stop a breach. The algorithm may be able to detect a pattern of bad behavior but, by the time it does, a breach may have already occurred. Of course, employees behaving badly can be singled out and educated and, with luck, this will solve the problem.
But what about the other type of insider threat: the malicious insider? These are employees who, for one reason or another, plan to use their access to valuable information to pursue nefarious goals. They may need money. They may be secretly working for people who want specific information. In other words, they are planning an attack. Is there any way to detect these harmful insiders?
Apparently, there is. Again, the answer lies in specially crafted algorithms. But how do they work? How do they identify a potential malicious insider whose online behavior may be pristine? That’s where the problems begin. Aberrant behavior may be easy to detect, but an aberrant employee whose behavior appears normal may stay under the radar until it is too late.
In a 2019 research paper, investigators from Singapore’s Nanyang Technological University stated that “emotional/psychological and social factors observed in social communication channels are critical in addressing early detection of insider threats.” They therefore analyzed employee emails that were publicly available in the Enron dataset. The diagram below shows how their algorithm worked. ABSA stands for aspect-based sentiment analysis. The idea was to focus on the aspect (emotions) expressed in sentences in the emails. The appearance of certain emotions could signal an impending attack. The context of the sentence had to be taken into account as well. For example, the sentences “I hate this job” and “I hate this food” both express dislike, but the context gives the first sentence more weight when trying to weed out a potential malicious insider.
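The context-weighting idea can be sketched as a toy scorer. To be clear, the lexicons and weights below are illustrative assumptions of mine; the researchers’ actual ABSA model was trained on the Enron corpus, not hand-coded like this.

```python
# Toy sketch of aspect-based sentiment analysis (ABSA): the same
# negative word matters more when its aspect is work-related.
# Word lists and weights are illustrative assumptions only.

NEGATIVE_WORDS = {"hate": -1.0, "angry": -0.8, "furious": -1.0}
WORK_ASPECTS = {"job", "boss", "company", "salary"}

def absa_score(sentence):
    """Score a sentence; work-related aspects amplify negative sentiment."""
    words = sentence.lower().replace(".", "").split()
    sentiment = sum(NEGATIVE_WORDS.get(w, 0.0) for w in words)
    weight = 3.0 if any(w in WORK_ASPECTS for w in words) else 1.0
    return sentiment * weight

print(absa_score("I hate this job"))   # -3.0: work aspect amplifies the score
print(absa_score("I hate this food"))  # -1.0: same word, lower-weight context
```

The two example sentences from the text come out differently even though both contain the same negative word, which is the whole point of weighting by aspect.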
Using this algorithm, the researchers could develop an aspect or emotional profile for each employee. Thus, whenever a specific employee had a change in their emotional state, the algorithm watching their behavior would detect this change and report it to those monitoring employee behavior. The researchers posited that a malicious insider will be in an intensified emotional state just before committing their crime. This being the case, companies can focus on those employees displaying such characteristics in their emails or in information they glean from their social media postings.
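A change in emotional state against a per-employee baseline can be sketched as a simple statistical test. The window size and threshold below are my own illustrative assumptions, not the researchers’ parameters.

```python
# Sketch of per-employee emotional baselining: compare the latest
# sentiment score to the employee's historical mean and flag a sharp
# deviation. The z-score threshold and minimum history are assumptions.

from statistics import mean, pstdev

def detect_shift(history, latest, z_threshold=2.0):
    """Flag `latest` if it deviates sharply from the employee's baseline."""
    if len(history) < 5:              # need a baseline before flagging
        return False
    mu = mean(history)
    sigma = pstdev(history) or 1e-9   # avoid division by zero
    return abs(latest - mu) / sigma > z_threshold

baseline = [-0.2, 0.1, 0.0, -0.1, 0.05, -0.05]  # typical mild scores
print(detect_shift(baseline, -3.0))  # True: sharp negative shift
print(detect_shift(baseline, 0.0))   # False: within normal range
```

The premise the researchers posited, an intensified emotional state just before the crime, maps onto the large deviation the second-to-last line detects.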
But companies can only access certain information on employees. They may have the right to monitor company-based emails or read open social media posts, but they cannot legally cross some lines. One can imagine that if a company could monitor all of an employee’s online activity, such as private posts or banking activity, it would be in a much better position to determine whether an employee had malicious intentions. Imagine if a company, or its algorithm, could monitor an employee’s bank account and find a large sum of money being transferred into it from a foreign entity. That would probably set off an alarm. But most companies don’t legally have this power. If only there were some way around this obstacle. Well, apparently, there is. At least that seems to be the argument presented in a 2017 paper in support of a patent application by the security firm ClearForce.
Before I begin this section, I should note that ClearForce has come under some scrutiny of late. Much of this relates to its appearance in a documentary entitled Shadowgate, which professes to show how the so-called deep state has leaked or channeled government-acquired information into private firms. The accusation is that these private firms, often run by, or associated with, former high-ranking intelligence community figures, were able to use government-developed algorithms, as well as information the government has amassed on U.S. citizens, to build profitable enterprises.
My own investigations into this accusation show uncomfortably close connections between private security firms and the government via the use of former high-ranking government officials as consultants. In short, many of these officials are profiting from the information or skills they acquired during their time in government. Many of them consult for multiple private firms and can make millions doing so. But sharing your expertise is one thing. Actively using government networks is another.
ClearForce is one of those insider-enhanced companies. Although it removed the link to the names of the board of directors on its website, you can still find that information in one of its news feeds. The board of directors now consists of General James L. Jones, USMC (Ret.), former National Security Advisor and 32nd Commandant of the Marine Corps, and General Michael Hayden, USAF (Ret.), former Director of the CIA and NSA. The company was founded by General Jones’ son, James L. Jones III.
ClearForce is, therefore, in a position to gather large amounts of data on individuals, which would make it easier to find insider threats. In its report, the company shows how certain legal loopholes allow it to gain control of employee data.
First of all, the company can collect data from any device they give to the employee because it is technically the company’s property. Since they own the device, they can require that any employee who works for the company agree to being monitored in certain ways. Notice, in the diagram, that if the employee does not agree, a report is generated on that employee. My guess is that this automatically leads the company to be suspicious of this non-consenting employee.
But if the employee consents, notice all that they consent to. Basically, data can be collected from everything they do and every network they are connected to. This includes all social media data, all electronic communication, and all financial data. At the bottom of the flow chart, an algorithm assesses all of the accumulated information and generates a behavior profile for the employee. After every bit of data is assessed, the algorithm, or the people working with it, determine which portions of the data already used in the analysis may be legally protected. Is that legally protected data then deleted and the algorithmic assessment recalculated? Not at all. Apparently, there is no crime in accessing this private data, only in distributing it. Thus, this private data can be used to form a more complete profile of the employee. If the employee is found to be behaving irresponsibly, a report is generated and an alert is sent to the employee. However, the ClearForce report points out that “the alert does not reveal to the user any references to the legally Protected Information which was used to process the alert.” So all the employee knows is that they have done something wrong, not that their private information was used to determine this. Here is more information from the report on how ClearForce uses protected information.
“However, to protect the employee’s privacy rights in compliance with federal and state laws, the alert that is generated and supplied (either alone or as part of a report) does not contain any of the legally Protected Information so as to avoid having the legally Protected Information improperly used by the users in deciding how to respond to the alert.”
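The consent-and-alert flow described above can be sketched in code. To be clear, the field names, risk scores, and threshold below are illustrative assumptions drawn from the flowchart’s description, not ClearForce’s actual system.

```python
# Sketch of the consent-and-alert flow: protected data informs the
# risk score but is redacted from the alert. All names and numbers
# here are illustrative assumptions, not the patented implementation.

def process_employee(record):
    """Return an alert dict; protected sources feed the score but are hidden."""
    if not record["consented"]:
        # Per the flowchart, non-consent itself generates a report.
        return {"user": record["user"], "alert": "non-consent report generated"}
    # The score uses ALL collected data, including legally protected fields.
    score = sum(item["risk"] for item in record["data"])
    if score <= 1.0:
        return {"user": record["user"], "alert": None}
    # The alert itself omits any reference to protected sources.
    visible = [d["source"] for d in record["data"] if not d["protected"]]
    return {"user": record["user"], "alert": "risk threshold exceeded",
            "sources_disclosed": visible}

record = {"user": "u1", "consented": True, "data": [
    {"source": "company email", "risk": 0.4, "protected": False},
    {"source": "bank transfer", "risk": 0.9, "protected": True},
]}
print(process_employee(record))
```

Note how the bank transfer pushes the score over the threshold yet never appears in the alert, which is exactly the asymmetry the report describes.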
What makes this more disturbing is the direct or indirect connections ClearForce may have to databases held by the intelligence community.
I have no doubt that ClearForce could develop a powerful algorithm to root out malicious insiders. I do question, however, whether false positives could ruin someone’s life. I also have to wonder what ClearForce does with the personal information it acquires at each of the companies that use its service. Does it destroy it? Does it channel it back to the intelligence community? We simply don’t know. Moreover, in the process of uncovering a malicious insider, employee trust in the company may be undermined. Employing such an algorithm is not without its risks, from high employee turnover, to loss of reputation, to lawsuits.
It’s still not clear how this algorithm will stop employees who behave well until they don’t, that is, employees whose behavior never appears anomalous. Other technologies exist that will stop malicious insiders without compromising employee privacy, employer-employee relations, or the company’s reputation. There is no doubt that good endpoint security is vital in today’s paradigm shift from in-house to in-home work. No company wants to be compromised by bad insider behavior, but no company wants the cost of cybersecurity to exceed the cost of a potential breach.