Within Microsoft 365 is something called the Microsoft 365 Compliance Center. It basically checks how your network complies with ever-changing regulations and tries to assess risky areas in your network, and this includes your employees.
Employees can become threats by accident or by intent. Malicious insiders are employees who intentionally try to undermine the company in some way. They may be trying to make money by leaking company secrets or selling personally identifiable information (PII). Then again, they may simply have a grudge against the company and want to do it harm. Accidental threats, on the other hand, occur when employees do things they shouldn’t, like downloading and opening an attachment in a phishing email. The Microsoft 365 Compliance Center takes actions designed to identify both types of employees.
But Microsoft is planning to do more. The Compliance Center has been rebranded, and its monitoring component will now be called Microsoft Purview Communication Compliance. This upgraded component will help the company “detect, capture, and act on inappropriate messages in your organization.” The definition of “inappropriate” is up to the company. The service allows the company “to scan internal and external communications” for compliance issues. In other words, all emails, chats, or video meetings on the corporate network will be monitored for behavior that the company deems unacceptable, whether those communications are private or not. This will include anything stored in the OneDrive cloud. Once identified, these communications will be sent to reviewers who will decide what actions to take. Individuals who may not realize they have violated company policies will receive a warning message, or they may have the offending portion of their communication removed. In the end, a profile of every employee will be generated to identify how much and which type of risk is associated with them.
This, of course, means that each company must set up its own code of ethics and educate employees on what behavior will be flagged as negative. Even after being educated, some employees may simply not feel that what they communicated was unethical. They may disagree that their actions have put the company at risk. The hope is, I suppose, that employees will learn correct behavior through trial and error.
It is necessary to point out that employees have very few privacy rights. The only question that remains is whether the company can have access to employee communications on their private devices outside of work hours. This brings us to another monitoring feature associated with Microsoft 365: TeleMessage. Installing this app on a phone allows the employer to monitor all activity on WhatsApp, WeChat, and Telegram, as well as capture all voice calls, SMS messages, and MMS. The TeleMessage app sends records of all of these transactions to its servers, where the employer can access and search them to see if any behavioral violations have occurred. The employer may not need to inform the employee of this if the employee is given a company phone with the TeleMessage app pre-installed. However, if the employer wants to monitor the employee via their private phone, they must inform the employee. Employment could depend on the employee accepting this condition. Here is a diagram of how the service works.
It should be pointed out that employee monitoring through Microsoft 365 has been going on for some time. It is simply being upgraded. Employees have been monitored to see if they were risks or if they held inappropriate views. The administrator’s job was to set up the compliance parameters, repeatedly tweak them, and re-educate employees. Each of these stages comes with its own problems which must also be solved. For example, an employee identified through a private WhatsApp communication as saying something inappropriate may be upset to see that a conversation with a friend was monitored. They may, in fact, be triggered into becoming a malicious insider who tries to undermine the company in some way to get revenge, creating a cycle of unacceptable behavior.
As previously mentioned, much of this has been part of the Compliance Center for a while. Microsoft plans to upgrade Purview Communication Compliance monitoring in September 2022. Here are some of the new features that will be rolled out. All of them come with the claim that the classifier “helps organizations detect explicit code of conduct and regulatory compliance violations, such as harassing or threatening language, sharing of adult content, and inappropriate sharing of sensitive information.”
The Leavers Classifier
Companies are always at risk from employees planning to leave. An angry Twitter employee who was leaving the company took down Donald Trump’s account for 11 minutes. Employees leaving a company may be in possession of sensitive information which they may be able to sell or parlay into a job with a competitor.
The key point of this upgrade is to detect leavers before they announce their intention to leave. I’m not sure how this could be done without generating numerous false positives. If I concluded a message with the phrase “I’m outta here,” would I be flagged as a potential leaver? What about “I’ll be leaving next month”… for vacation? Simply flagging the wrong people may produce negative feelings. Again, this is a good idea that will probably need a lot of tweaking.
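Microsoft has not published how this classifier actually works, but the false-positive problem described above is easy to demonstrate with a hypothetical sketch: a naive phrase-matching detector (the phrase list below is entirely invented for illustration) cannot tell a genuine resignation from a vacation note.

```python
import re

# Hypothetical sketch of a phrase-based "leaver" detector. The phrase
# list is invented; it only illustrates why naive keyword matching
# invites false positives.
LEAVER_PHRASES = [
    r"i'?m outta here",
    r"i'?ll be leaving",
    r"my last day",
    r"hand(ing)? in my notice",
]
PATTERN = re.compile("|".join(LEAVER_PHRASES), re.IGNORECASE)

def flag_as_leaver(message: str) -> bool:
    """Return True if the message matches any 'leaver' phrase."""
    return PATTERN.search(message) is not None

# A genuine signal and an innocent vacation note both trigger the flag:
print(flag_as_leaver("I'm handing in my notice; Friday is my last day."))  # True
print(flag_as_leaver("I'll be leaving next month... for vacation!"))       # True
```

A real classifier would presumably weigh context and many signals rather than bare phrases, but any text-based approach has to draw this line somewhere.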
The Corporate Sabotage Classifier
“The sabotage classifier detects messages that explicitly mention acts to deliberately destroy, damage, or destruct corporate assets or property.” I’m not sure why anyone would openly talk about this, but, if they did, Purview would, supposedly, flag the message and alert the reviewers. It would seem to be easy to trigger this detection in a way similar to that mentioned above.
Gifts and Entertainment Classifier
This classifier “detects messages that contain language around exchanging of gifts or entertainment in return for service, which may violate corporate policy.” I would think this would be hard to delineate through linguistic collocation. My guess is that only the most obvious occurrences would be identified.
Money Laundering Classifier
“The money laundering classifier detects signs of money laundering or engagement in acts design(ed) to conceal or disguise the origin or destination of proceeds.” I suppose this classifier homes in on any communications about money and may even follow banking activity. According to the classification information given by Microsoft, the algorithms detect bank account numbers.
Stock Manipulation Classifier
“The stock manipulation classifier detects signs of stock manipulation, such as recommendations to buy, sell, or hold stocks in order to manipulate the stock price.” This classifier must use phrases and number patterns to detect illegal activity.
Unauthorized Disclosure Classifier
“The unauthorized disclosure classifier detects sharing of information containing content that is explicitly designated as confidential or internal to certain roles or individuals in an organization.” This probably focuses on key words concerning secret or protected information and determines if the people communicating about this information have the right to be talking about it.
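One plausible way to implement the role check described above is a simple label-to-roles lookup: flag any recipient whose roles do not overlap with the roles authorized for the message's confidentiality label. Everything below (the label names, roles, and directory) is invented for illustration; it is not Microsoft's implementation.

```python
# Hypothetical sketch of an unauthorized-disclosure check: a labeled
# message is flagged when a recipient's roles don't intersect the set
# of roles authorized for that label. All data here is made up.
AUTHORIZED_ROLES = {
    "Confidential - Finance": {"cfo", "finance-analyst"},
    "Internal Only": {"employee"},
}

EMPLOYEE_ROLES = {
    "alice@example.com": {"cfo"},
    "bob@example.com": {"employee"},
}

def unauthorized_recipients(label: str, recipients: list[str]) -> list[str]:
    """Return recipients whose roles do not permit access to the label."""
    allowed = AUTHORIZED_ROLES.get(label, set())
    return [r for r in recipients
            if not EMPLOYEE_ROLES.get(r, set()) & allowed]

print(unauthorized_recipients("Confidential - Finance",
                              ["alice@example.com", "bob@example.com"]))
# ['bob@example.com']
```

The hard part in practice would be keeping the role directory and label assignments accurate; the set intersection itself is trivial.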
Workplace Collusion Classifier
This classifier seems a bit vague. It is meant to detect “messages referencing secretive actions such as concealing information or covering instances of a private conversation, interaction, or information.” Does this mean that deleting a conversation would trigger this classifier? Probably.
In short, this attempt at security seems good at the theoretical level but stumbles at the practical level. My guess is that so much data will be flagged as potentially dangerous that reviewers will have trouble sorting through it all. This is no small problem. The time lag between detection and action may allow bad actors to succeed in their efforts. An additional problem, which I mentioned previously, could be the creation of a negative work environment as more and more employees are falsely identified as potentially malicious. The company could take on the atmosphere of a police state.
I also wonder about the huge amount of data being harvested by companies solely from this surveillance. The algorithms not only look for ordinary information like “City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode”. They also target passport numbers, banking information, social security numbers, driver license numbers, credit card numbers, and tax identification numbers, not only from the U.S. but from every country on Earth. You can see it for yourself here. Keep in mind that none of this is illegal. My problem would be with how this information is stored and protected. Could it be a target for hackers? Possibly. But it could also be a target for malicious insiders who may have been provoked by the Purview system itself.
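Detection of the data types listed above is typically pattern-based. Microsoft documents what its built-in sensitive information types match, not the exact patterns; the simplified regexes below are my own and would miss many real-world formats, but they show the general shape of this kind of scanning.

```python
import re

# Hypothetical, heavily simplified sensitive-information detectors.
# Real detectors also use checksums (e.g. Luhn for credit cards),
# keywords, and proximity rules to cut down false positives.
DETECTORS = {
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "U.S. Phone Number": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "Credit Card Number": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of every detector that matches the text."""
    return [name for name, rx in DETECTORS.items() if rx.search(text)]

print(scan("Call me at 555-123-4567; my SSN is 123-45-6789."))
# ['U.S. Social Security Number', 'U.S. Phone Number']
```

Every match found this way becomes another piece of PII copied into the monitoring system's own storage, which is exactly the aggregation risk raised above.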