Securing financial data of the future: behavioral biometrics explained
Some of us would be pretty excited about a brave new passwordless world. Gone would be the days of writing down 27 passwords and posting them beside our monitors, or yelling them out loud to a colleague on the other side of the room.
For banks and other financial institutions, however, a world without passwords may not be exactly the scenario they had in mind. They have realized that passwords, even with the aid of two-factor authentication, don’t do enough to protect sensitive client information from today’s digital threats.
They have since been on the lookout for innovative ways to efficiently secure customer accounts. It didn’t take long for them to start considering biometrics—the measurement and analysis of a person’s unique characteristics.
While a range of banks have already adopted some biometric modalities, recently more and more financial institutions are beginning to take notice of behavioral biometrics due to its:
- Flexibility. It can be tailored to an organization’s specific needs.
- Convenience. It requires no specialized hardware, and doesn’t negatively impact user experience.
- Efficiency. It functions in real-time and can be used alongside other modes of authentication.
- Security. It’s inherently difficult to replicate or steal.
Let’s get to know this promising biometric modality even more, shall we?
What is behavioral biometrics?
Also known as behaviometrics or behavior-based authentication, this is a dynamic form of authentication that looks into a person’s behavioral patterns—the way they interact with systems and technologies—to identify users.
In the financial sector, this is used to continuously ensure that the person in a transaction with an online bank, an eCommerce site, a payment app, or a multi-factor authentication service is who they claim to be from the time they log in to the time they log out.
With this mode, users have no extra passwords to memorize, and no set of backup codes to download and save in case something goes wrong with their password. Because behavioral biometrics is generally passive, meaning users won’t even realize it’s there, they can carry out their financial transactions continuously and securely without having to take any additional action.
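To make the idea of continuous, passive authentication concrete, here is a minimal sketch of how a session might be scored from login to logout. Every function name, the stored average, and the threshold are illustrative assumptions, not any provider's actual implementation:

```python
# Hypothetical sketch of continuous (passive) authentication: each
# behavioral sample collected during a session is scored against the
# user's stored profile, and the session stays trusted only while all
# scores remain within the normal range.

def session_authenticated(samples, score_fn, threshold=0.8):
    """Return True only if every behavioral sample in the session
    matches the stored profile, from login to logout."""
    return all(score_fn(s) >= threshold for s in samples)

# Toy scorer: compare a sample to a stored average (an assumed value
# standing in for, e.g., the user's typical keystroke interval in ms).
stored_avg = 100.0

def toy_score(sample_ms):
    # Similarity decays as the sample drifts away from the stored average.
    return max(0.0, 1.0 - abs(sample_ms - stored_avg) / stored_avg)

print(session_authenticated([95, 102, 110], toy_score))  # normal behavior
print(session_authenticated([95, 250, 110], toy_score))  # anomalous sample
```

The key property is that the check repeats for every sample, so an account hijack mid-session is caught even though the original login was legitimate.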
Outside the financial sector, behavioral biometrics can be applied to almost any modern computing technology, letting users carry out tasks without interruption while potentially sensitive data is protected in real time. Here is a scenario showing how this might look in practice:
Sarah sends $25 to her flatmate, Cindy, via a popular P2P payment app as contribution to this weekend’s grocery run. They have planned on having a 90’s horror movie marathon, and she invited her cousin, Robin, to tag along. While Sarah and Cindy are preparing meals in the kitchen, Robin is left in the living room to pick out the films.
Robin sees that Sarah has left her phone on the couch, unlocked. After a quick glance at the kitchen entryway, she grabs the phone and loads up her cousin’s P2P app, intending to send money to her own account because her mom didn’t have much in her wallet when she checked it this morning. Robin has a night out with friends planned for tomorrow, and if Sarah notices the transfer, she’s confident her cousin will understand.
After Robin confirms the money transfer on the app, a message pops up on the screen saying she isn’t authorized to do that. This is because the biometric software in the payment app has detected a mismatch between Robin’s behavioral pattern and Sarah’s unique profile.
Shut the front door! Can it really be that accurate?
Terrifically and terrifyingly so, which is great for consumers and financial institutions alike. The scenario above is hypothetical, so here’s a real one: from 2012 to 2013, a European bank implemented a behavioral biometric scheme in its online banking service. Behavior data was collected from a limited number of clients per session over several months.
A session began when a customer logged in to the online banking portal and ended when the customer logged out. The transactions they completed while logged in could range from using a one-time password to regular banking transactions like checking accounts or transferring money.
As behavioral biometrics worked behind the scenes, clients went about their business with little to no disruption from the biometric scheme. After the trial run, the bank assessed the collected data and determined that clients during those sessions were recognized as the correct users 99.7% of the time.
This means that if User A logs into an online bank with a behavioral biometric implementation, the system can validate that, based on their behavior patterns, the user logged in is still User A from beginning to end of a session.
And a 99.7% accuracy rate is said to be significantly higher than that of other biometric modalities.
This almost sounds too good to be true. How does it work?
The key to the accuracy of behavioral biometrics is machine learning: computer systems that automatically learn from data and improve without being explicitly programmed. A user’s behavioral patterns, such as the way they hold their smartphone, move their mouse, or swipe a finger across a tablet screen, are measured and recorded using sensors such as a smartphone’s accelerometer and gyroscope.
Advanced software algorithms then analyze all collected data to create a profile for the user. This profile is then used to continuously check against a user who is in an online banking session, as we have seen in the above case study. And that user could either be the true user, an imposter, or a bot.
The true user will almost always match their profile, while it is extremely difficult for an imposter to mimic their victim’s behavior. Bots generally fail to demonstrate measurable human responses, and don’t respond at all to random passive behavioral tests injected within a session, which makes them the easiest for a system to spot.
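The profile-matching idea above can be sketched with a deliberately simple statistical model. This is an illustrative assumption, not any vendor's actual algorithm: real systems combine many behavioral features with far more sophisticated models. Here, a user's typing-interval profile is just a mean and standard deviation, sessions that deviate too far are flagged as imposters, and unnaturally uniform timing is flagged as a bot:

```python
# Illustrative sketch: build a behavioral profile from typing-interval
# measurements (milliseconds between keystrokes), then classify new
# sessions against it. All thresholds are assumed for the example.
from statistics import mean, stdev

def build_profile(intervals_ms):
    return {"mean": mean(intervals_ms), "std": stdev(intervals_ms)}

def classify(profile, session_ms, z_limit=3.0, min_jitter=1.0):
    # Bots tend to show near-zero variation in timing, unlike humans.
    if stdev(session_ms) < min_jitter:
        return "bot"
    # Score how far the session's average deviates from the profile.
    z = abs(mean(session_ms) - profile["mean"]) / profile["std"]
    return "true user" if z <= z_limit else "imposter"

profile = build_profile([110, 95, 120, 105, 100, 115])
print(classify(profile, [108, 98, 117, 103]))   # → true user
print(classify(profile, [40, 55, 35, 48]))      # → imposter
print(classify(profile, [100, 100, 100, 100]))  # → bot
```

Even this toy version shows why the three cases separate cleanly: the true user stays close to their own baseline, an imposter's timing lands far from it, and a bot's timing is too regular to be human.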
As of this writing, behavioral biometrics is used for continuous authentication, risk-based authentication, insider threat detection, and fraud detection and prevention.
Wait. Isn’t it a privacy violation when a product starts collecting users’ patterns of behavior?
When it comes to biometrics, privacy is a huge concern, and this particular modality isn’t immune to such concerns.
Unlike traditional forms of biometrics that gather and store physiological characteristics of a person, such as their fingerprint or iris scan, behavioral biometrics collects user data that cannot, on its own, be associated with a particular individual. A person’s fingerprint can identify them, but how they move their mouse pointer or hold their smartphone cannot. Behavioral biometrics also doesn’t need to know who you are, where you live, what bank your savings are at, or what your account credentials are in order to be sure that you are the same user who logged on last time.
According to the International Biometrics + Identity Association (IBIA), the kind of data collected by behavioral biometric applications is data already being received by device or network operators under standard privacy laws. But even though behavior data can be classed as non-personally identifiable information, the FTC, state governments, and the US Congress are considering regulating and restricting its collection and sharing.
This is mainly due to the practice of behavior monitoring and targeting for the purpose of online advertising, not for the purpose of authentication or as an added security layer. The Electronic Frontier Foundation (EFF) has published a legislative primer about concerns and solutions for this kind of behavior monitoring.
What providers of this modality can do is educate their users about the security benefits of behavior-based authentication, be transparent about how the collected data is used, and give users the option to revoke the usage license for their biometric data. The IBIA has already asserted that behavioral biometrics providers are looking for novel approaches to addressing privacy concerns.
Are there any disadvantages to behavioral biometrics?
However impressive behavioral biometrics may sound, the fact is that this modality, like the others, isn’t perfect. Voice or speaker recognition, signature analysis, and keystroke dynamics are all methods of behavioral biometrics, and each has weaknesses. Sometimes, they even pose additional security risks.
For example, voice authentication can be circumvented by obtaining a high-quality recording of the target’s voice to be played back in the future. Background noise is also an issue when it comes to registering one’s voice for authentication. Keystroke analysis and signature recognition have low accuracy rates and can readily be affected by the user’s physical and emotional disposition, respectively.
Note that the behavioral biometric schemes employed by current providers for financial systems do not use the above modalities. As of this writing, there is no literature on the disadvantages of AI-driven behavioral biometrics.
What happens if a user changes the way they interact with their device?
We mentioned earlier that behavioral biometrics is a dynamic form of authentication. By this we mean that it doesn’t only accept and remember one biometric entry, such as a fingerprint. Instead, it receives and recognizes the number of ways users perform an action and other notable characteristics. All these (and possibly more) are analyzed and used to create a profile for a single user.
Changes in a person’s behavior happen naturally, often without the person realizing it, and this is accounted for when measuring behavior. Let’s also not forget that behavioral biometrics coupled with machine learning continuously checks the identity of a user throughout a session. Slight differences in behavior won’t lock someone out of their account.
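One common way to account for this kind of gradual drift is to keep updating the stored profile as new measurements arrive. The exponentially weighted update below is a minimal sketch under that assumption; the weight `alpha` and the swipe-speed baseline are illustrative values, not a documented implementation:

```python
# Minimal sketch of profile drift handling: blend each fresh measurement
# into the stored baseline so the profile follows gradual, natural
# changes in behavior instead of locking the legitimate user out.

def update_profile(profile_mean, new_sample, alpha=0.1):
    """Exponentially weighted update: small alpha means the baseline
    moves slowly, so one odd sample barely shifts the profile."""
    return (1 - alpha) * profile_mean + alpha * new_sample

baseline = 100.0  # assumed baseline, e.g. typical swipe speed in px/ms
for sample in [102, 104, 103, 105]:  # user gradually speeds up
    baseline = update_profile(baseline, sample)
print(round(baseline, 2))  # → 101.24
```

The design choice matters: a slow-moving baseline tolerates natural change while still leaving an abrupt, imposter-sized jump far enough from the profile to be flagged.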
What other threats does behavioral biometrics claim to address?
In the financial sector, behavioral biometrics can provide protection against account-sharing fraud, account takeover attacks, new account fraud, malware, and some remote access Trojans (RATs). One provider even claims it can help protect against ransomware.
Behavioral biometrics can also be used to nip insider threats in the bud.
Not yet ready to replace passwords
Behavioral biometrics in the financial sector is relatively new and continues to improve and mature. At this point in time, it doesn’t claim to replace passwords or two-factor authentication. It is, however, another supplement to the bigger picture: a layered security approach to protecting sensitive user data. So as much as we’d like to reach that ultimate dream of an utterly passwordless society—and this is entirely possible in our lifetime—we may have to wait a little bit longer. After all, the groundwork is prepped, and we’re seeing organizations beginning to build on it.
Behavior-based authentication can potentially change a lot of things, and we can expect a new trend emerging in user verification soon. And with it comes possibly new and interesting challenges for the cybersecurity industry to tackle.
The post Securing financial data of the future: behavioral biometrics explained appeared first on Malwarebytes Labs.