There are insider threats, and there are insider threats. The first type can be classified as naïve: employees who expose the company to cyber attacks through irresponsible behavior. For example, an employee may have been told that public Wi-Fi is unsafe, but they log into their company network through it anyway. These employees probably don’t really believe that their actions could put the company at risk; they assume the dangers of a cyber attack are exaggerated. Nonetheless, such careless employees can prove just as dangerous as the other type of insider threat: the malicious insider.
The malicious insider intentionally wants to harm the company for one reason or another. They may want revenge, or they may simply want to make money. Having access to the corporate network, they can reach sensitive data and resell it or hold it for ransom. They can even alter or destroy it if they want. Some of these individuals may have been openly recruited by hacking groups who promise them a percentage of the profits from any successful hack; ransomware-as-a-service (RaaS) works this way. There are also insiders who may be secretly working for a foreign government. These employees will seek out company secrets to benefit similar businesses in their homeland, usually at the behest of the government that employs them. The problem for companies, therefore, is not only uncovering an insider threat but also determining whether the threat they uncovered was planned or unintentional.
Insider threats are on the rise. Verizon’s breach data shows the number of insider attacks climbing in recent years.
Recent findings from a Gurucul report point in the same direction.
Even worse is the fact that most organizations (58%) admit that they are not confident they could detect an insider threat until it was too late. In short, companies and organizations are worried about insider threats but don’t really feel confident in dealing with them.
And this is where the F.B.I. comes in. They want to help enterprises identify malicious insiders before they can begin their attacks. Information on the IDLE (Illicit Data Loss Exploitation) program was given in a Private Industry Notification under the sharing restrictions of the TLP:GREEN protocol.
For this reason, details on it are not widely available.
That said, we do know that it is a program that aims to take a proactive role in fighting cyber attacks. However, in order to do this, the F.B.I. would need private companies and organizations to give them access to their networks and files, a step that may make some organizations more than a little nervous.
Keep in mind that the F.B.I. is not instituting this program out of pure altruism. They realize that foreign governments are targeting certain industries to gain access to secret information. The theft of such information could threaten U.S. security. A breach of a key company could also enable the criminals to use it as a platform from which they could work themselves into government networks. In other words, the F.B.I. wants to use these companies and organizations as a shield to protect itself and other government branches.
Think of the IDLE program as a way for the F.B.I. to implement sophisticated honeypots. For those who don’t know, honeypots are false files or folders placed on a network to lure attackers away from valuable data. Once attackers enter the honeypot, they can be identified and/or blocked from the network. The IDLE honeypots, or deception platforms, are more cleverly designed.
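The basic honeypot idea described above is simple to sketch. The decoy paths, user names, and log format below are purely illustrative (nothing here is taken from the IDLE program itself): any access to a planted decoy file is suspicious by definition, since no legitimate workflow should ever touch one.

```python
# Minimal sketch of a decoy-file tripwire. The decoy paths and event
# format are illustrative assumptions, not details of the IDLE program.

DECOY_PATHS = {
    "/finance/2020_acquisition_targets.xlsx",   # planted fake
    "/hr/executive_salaries_draft.docx",        # planted fake
}

def flag_decoy_access(events):
    """Return the set of users who touched any planted decoy file.

    `events` is an iterable of (user, path) tuples, e.g. parsed
    from a file-server audit log.
    """
    return {user for user, path in events if path in DECOY_PATHS}

events = [
    ("alice", "/eng/roadmap.pptx"),
    ("bob", "/finance/2020_acquisition_targets.xlsx"),  # decoy hit
    ("carol", "/hr/holiday_schedule.pdf"),
]
print(flag_decoy_access(events))  # {'bob'}
```

The strength of the approach is its near-zero false-positive rate: because the decoys serve no business purpose, anyone reading them has, at minimum, gone looking where they shouldn’t.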
For legal reasons, the F.B.I. cannot officially describe what they do as ‘designing honeypots’. If they did, an apprehended attacker could use entrapment as a defense. Instead, the F.B.I. uses a different strategy: they watch a user’s network browsing behavior and, if that behavior exceeds certain parameters (for example, if certain files or folders are accessed or downloaded), the user is flagged as potentially malicious.
My guess is that the F.B.I. needs access to a company’s network long enough to establish a baseline of normal activity. Once that is accomplished, an algorithm could differentiate normal from abnormal network use. Flagged users could then be investigated: insiders could be questioned, while outside attackers could be traced or blocked. In other words, potential criminals could be identified and dealt with before any nefarious activity, such as downloading, even begins. This would seem to be a much better approach than waiting for an attack to take place.
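One simple way to implement baseline-then-flag detection, assuming per-user daily counts of sensitive-file accesses are already available from an audit log, is a z-score test against each user’s own history. The threshold and numbers below are illustrative assumptions, not anything known about IDLE’s internals.

```python
import statistics

def build_baseline(daily_counts):
    """Mean and standard deviation of a user's historical daily accesses."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts) or 1.0  # avoid divide-by-zero
    return mean, stdev

def is_anomalous(today, baseline, z_threshold=3.0):
    """Flag a day whose access count sits more than z_threshold
    standard deviations above the user's own baseline."""
    mean, stdev = baseline
    return (today - mean) / stdev > z_threshold

history = [4, 6, 5, 7, 5, 6, 4]     # a typical week: ~5 accesses/day
baseline = build_baseline(history)
print(is_anomalous(6, baseline))    # False: within normal range
print(is_anomalous(60, baseline))   # True: far above this user's baseline
```

Comparing each user against their own history, rather than a company-wide average, matters: an accountant who opens fifty finance files a day is normal, while an engineer doing the same is not.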
However, even with all of this protection in place, sophisticated hackers could still manage to send files to a C2 server. In fact, the F.B.I. may even want them to do this, at least for fake documents designed to appear legitimate. Why? Because they could place beacons in these documents. Beacons would enable them to trace the location of a stolen document and determine whether and how it has been used. Such beacons can be designed to fit into an invisible single pixel, but multiple tracking methods could be employed. These methods are outlined in a research paper entitled Baiting Inside Attackers Using Decoy Documents (Brian M. Bowen, Shlomo Hershkop, Angelos D. Keromytis, and Salvatore J. Stolfo), and I imagine the F.B.I. is well aware of all of them.
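The single-pixel beacon idea can be sketched for an HTML decoy: each document embeds a unique remote-image URL, so whenever the document is rendered, the viewer’s machine fetches that URL and the tracker’s server logs who opened which decoy, from where, and when. The domain and page content below are hypothetical, and real decoy documents (per the Bowen et al. paper) use several formats and channels, not just HTML.

```python
import secrets

# Hypothetical beacon endpoint -- not a real service.
BEACON_HOST = "https://tracker.example.com/pixel"

def make_decoy_html(title):
    """Return (token, html): an HTML decoy carrying a 1x1 tracking image."""
    token = secrets.token_hex(8)            # unique per document
    html = (
        f"<html><head><title>{title}</title></head><body>"
        f"<h1>{title}</h1><p>CONFIDENTIAL - internal draft.</p>"
        # Invisible 1x1 beacon: rendering this page fetches the URL,
        # and the token in the query string identifies which decoy leaked.
        f'<img src="{BEACON_HOST}?d={token}" width="1" height="1">'
        f"</body></html>"
    )
    return token, html

token, html = make_decoy_html("Q3 Acquisition Shortlist")
assert token in html  # the server maps this token back to the decoy
```

The per-document token is what turns a generic tracking pixel into evidence: if the beacon fires from an unexpected network, the defender knows exactly which planted file was exfiltrated.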
Of course, some companies may be uncomfortable with allowing the F.B.I. into their networks. However, the F.B.I. claims that once installed, they will leave IDLE in the company’s hands. They do not need to maintain a presence on the network. That said, you would expect the companies to be obliged to inform the F.B.I. if a breach was detected. Many companies don’t like to announce a breach since this could affect their reputations and, by extension, their profits. This alone may make them reluctant to work with the IDLE program. My guess would be that the companies most likely to accept the offer would be those that have experienced previous breaches or those that detect frequent attacks. If, however, the IDLE program succeeds in disrupting a major attack and that success is made public, more companies would be willing to sign on.
Government cybersecurity agencies have made a number of attempts to reach out to private businesses in recent years, but their success at gaining cooperation has been mixed. At the base of such alliances is the degree to which a government agency can be trusted; in fact, the number of organizations that sign on to the IDLE program can serve as a barometer for such trust. Then again, if the insider threat gets any worse, companies may have to look for government help whether they trust the government or not.