We are all unique individuals. We have unique fingerprints, unique voice patterns, and unique eyes. Cyber security experts realized long ago that if they could use these unique features to identify a person who wanted to use a specific, protected device or service, then that user would be protected from fraud. In other words, criminals could no longer pretend to be you by stealing your password. They would have to biologically prove that they were you, which is just not possible, right?
Well, in principle, this is true. The problem lies in the analog-digital divide. As much as digitalization attempts to simulate analog reality, it is, at base, reality simulated with computer code. This is called software. And, as almost everyone knows, software can be hacked.
The difference between a hacker who uses your stolen password and one who uses your stolen fingerprints is that a password can be changed; your fingerprints cannot. Once a hacker has your fingerprints or any other biometric identifier, they are, in effect, you. This is why, amid the current rush into biometric authentication, some people are raising the caution flag.
There are many physical ways to steal biometric data. These include using photographs to fool facial or iris recognition systems and lifting fingerprints left on glasses. However, these physical hacking methods are not what concern me here. What interests me is the information stored on the computer after the physical interface (such as a fingerprint scanner) translates the input it receives into code, encrypts it, and stores it. After the system registers your initial biometric data, every subsequent authentication attempt is compared against that standard. If the new input sufficiently matches the registered data, the user is allowed to continue into a device, a network, or a specific site. This comparison is done with software that must be designed with some adaptability. Say, for example, that authentication relies on voice analysis but you have a sore throat. Is your device good enough to determine that you are still you, or will it lock you out? But first things first.
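To make the "sufficiently matches" idea concrete, here is a minimal sketch of threshold-based matching. It assumes the scanner has already reduced a sample to a feature vector; the vectors, the cosine-similarity measure, and the threshold value are all illustrative, not taken from any real biometric product.

```python
import math

MATCH_THRESHOLD = 0.85  # illustrative value; real systems tune this per modality

def cosine_similarity(a, b):
    """Compare two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(enrolled_template, new_sample):
    """Accept the attempt if it is 'close enough' to the enrolled template."""
    return cosine_similarity(enrolled_template, new_sample) >= MATCH_THRESHOLD

# Hypothetical enrolled voice features; a sore throat shifts the new
# sample slightly, but not enough to fall below the threshold.
enrolled = [0.9, 0.4, 0.7]
sore_throat = [0.85, 0.45, 0.65]
print(authenticate(enrolled, sore_throat))  # prints True
```

The threshold is the adaptability knob: set it too tight and a sore throat locks you out; set it too loose and an attacker's approximation of your voice gets in.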
Some banks are moving away from one-time passwords delivered via SMS because these messages have been intercepted by malware-based attacks. In other words, hackers who have gained control of a device in some way simply read the SMS password the bank sends. To stop this, some banks will call the user to give them a password, on the theory that hearing the user’s voice is evidence enough that they are the valid user. Unfortunately, a new version of the Android.Bankosy malware will (once installed on a device through a third-party, downloaded app) put the device in silent mode and forward all calls from the bank to the attacker’s number. Since the attacker already knows your bank password, as well as other personal information, they can pretend to be you. So your voice biometric won’t help in this case.
Other attacks can actually use your voice against you. Nitesh Saxena and his team at the University of Alabama at Birmingham showed how stolen samples of a person’s voice could be used, with the help of morphing software, to reproduce the victim’s voice so that attackers could use it as their own. They claimed they could collect voice samples by recording a victim’s real-life conversations, in a restaurant, for example, or by pulling them from videos the victim may have posted online. My guess is that a good, off-the-shelf remote access Trojan (RAT), which can take full control of a user’s device, could simply record the victim’s phone calls remotely for use in future morphing. Once the attacker has the user’s voice, they can use it to authenticate themselves on any device or network that requires the victim to use such recognition. Some government buildings, for example, require voice authentication for entrance. With more banks and banking apps turning to voice biometrics … well, you can guess the rest.
As Robert Capps, vice president of business development at NuData Security, points out,
“While the use of these physical biometric factors have been a boon for physical security, the recent attacks on the Android platform show that many such factors quickly lose effectiveness on their own when a user is not physically present for authentication. To provide effective security using physical biometric authenticators, one must pair them with a comprehensive interpretation of real-time signals to evaluate risk, along with a deeper understanding of the actual user behind the keyboard using behavioral analytics.”
This still raises the question: Won’t hackers sooner or later be able to simulate whatever behavioral identification is in place? Besides, any malware that lets hackers take complete control of a device won’t need to worry about fooling biometric security because, well, they have complete control of the device. They can, for example, simply shut down or alter such security whenever it suits them.
This is why Intel and others are abandoning software security solutions and using the device’s hardware to thwart attacks. Intel’s explanation of its new Authenticate security sounds a little ambiguous, at least to me.
“With Intel Authenticate Technology, PIN, biometrics, keys, tokens and associated certificates are captured, encrypted, matched, and stored in the hardware, out of sight and reach from typical attack methods.”
Are they captured and then stored in the hardware, or are they captured directly in the hardware? Intel does not make this, or its Authenticate architecture, clear. It never claims its security is hardware-based but instead refers to it as “hardware-enhanced”. Intel never claims that Authenticate will stop hackers, only that it will make things “harder” for them. We know only that it works with the IT policy engine, which is software at a pre-network level. We also do not know much about the user experience or how it holds up against various attack vectors. This is not to say it is not a step in the right direction. It probably is. Any obstacle placed in the way of attackers makes a network more secure.
However, some security experts don’t think putting security in the hardware is a good idea. Mark Shuttleworth, the founder of Ubuntu, claims that “it’s reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies.” It is not clear how the Intel technology works, but it must somehow communicate with the operating system to allow a user who logs in (via biometrics, etc.) to use it. Firmware has been compromised in the past through fake or compromised updates. The NSA-linked Equation Group (which Kaspersky dubbed “the Death Star of the Malware Galaxy”) used an exploit that could take full control of a device’s hard drive firmware. If an individual on a corporate network is compromised in such a manner, then the entire network, with all of a corporation’s sensitive data, is at risk. Other layers can be added at the hardware level to prevent compromised endpoints from putting corporations at risk (see note below). Intel seems to think that hardware-based security is the next big thing, and, given the failure of software protection, they may, in fact, be right. Biometrics hold a lot of promise, there’s little doubt of that. With the proper matching of biometrics to a variety of hardware solutions, corporations and institutions may be on the verge of gaining the upper hand in the security wars.
(Disclaimer: This blog is associated with WorkPlay Technology, which does not invalidate the following information.) WorkPlay Technology separates one device into two devices at the hardware level. It places two operating systems (which can even be different operating systems) on one device. This means the corporate network can run on one operating system (one side of the device) with the user’s personal data on the other. If, for example, Intel’s biometric security is compromised, the malware still cannot cross the hardware barrier and infect the corporate side of the device.