In today’s highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough opportunity to change their mind at a moment’s notice. A misstep that diminishes a customer’s experience during registration or onboarding can cause them to replace one brand with another, simply by clicking a button.
Consumers are also increasingly concerned about how companies protect their data, adding a new layer of complexity for businesses as they aim to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing privacy concerns, while 78% expressed fears related to the amount of data being collected.
At the same time, increasing digital adoption among consumers has led to a staggering increase in fraud. Companies need to build trust and help consumers feel their data is protected, but also need to deliver a fast, seamless onboarding experience that truly protects against back-end fraud.
As such, artificial intelligence (AI) has been hyped as the silver bullet for fraud prevention in recent years for its promise of automating the process of verifying identities. Despite all the chatter surrounding its application in digital identity verification, there are still a lot of misconceptions about AI.
Machine learning as a silver bullet
As the world stands today, there is no true AI in which a machine can verify identities without human interaction. When companies talk about leveraging AI for identity verification, they’re really talking about using machine learning (ML), which is an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time.
When applied to the identity verification process, ML can play a game-changing role in building trust, removing friction and fighting fraud. With it, companies can analyze huge amounts of digital transaction data, create efficiencies and recognize patterns that can improve decision-making. But getting caught up in the hype without really understanding machine learning and how to use it properly can reduce its value and in many cases lead to serious problems. When using ML for identity verification, companies should consider the following.
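To make "learning from data" concrete, here is a minimal sketch of how a toy fraud score could be derived from labeled historical transactions. All feature names, labels and data are hypothetical, and a production system would use a proper ML library rather than raw fraud rates:

```python
# Toy illustration: "training" a fraud score from labeled historical
# transactions. Features, labels and data are hypothetical.

def train_weights(transactions):
    """Learn a per-feature weight: the observed fraud rate among
    training transactions that carry that feature."""
    counts = {}
    for features, is_fraud in transactions:
        for f in features:
            pos, total = counts.get(f, (0, 0))
            counts[f] = (pos + (1 if is_fraud else 0), total + 1)
    return {f: pos / total for f, (pos, total) in counts.items()}

def fraud_score(features, weights):
    """Average the learned weights of the features present."""
    known = [weights[f] for f in features if f in weights]
    return sum(known) / len(known) if known else 0.0

history = [
    ({"new_device", "mismatched_address"}, True),
    ({"new_device"}, False),
    ({"known_device"}, False),
    ({"mismatched_address", "vpn"}, True),
]
w = train_weights(history)
print(fraud_score({"new_device", "mismatched_address"}, w))  # 0.75
```

Feeding the system more labeled transactions refines the weights over time, which is the "learning" the article refers to.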
The potential for bias in machine learning
Bias in machine learning models can lead to exclusion, discrimination and ultimately a negative customer experience. Training an ML system using historical data will translate biases in the data into the models, which can be a serious risk. If the training data is biased or subject to unintentional bias by those building the ML systems, decisions may be based on preconceived assumptions.
When an ML algorithm makes incorrect assumptions, it can create a domino effect where the system consistently learns the wrong thing. Without the human expertise of both data and fraud researchers, and oversight to identify and correct the bias, the system will keep repeating and compounding its mistakes.
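The mechanism is easy to see in miniature. In this hypothetical sketch, historical decisions were skewed against one group for reasons unrelated to actual fraud risk, and a naive model trained on those decisions simply reproduces the skew:

```python
# Toy illustration of bias carried from training data into decisions.
# Groups, rates and the 0.7 threshold are hypothetical.

def learned_approval_rate(history, group):
    """Approval rate this group received in the historical data."""
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

# Biased history: group "B" was approved far less often than "A",
# for reasons unrelated to genuine fraud risk.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 50 + [("B", False)] * 50)

def model_decision(group):
    # Naive model: approve when the learned historical rate clears 0.7.
    return learned_approval_rate(history, group) > 0.7

print(model_decision("A"), model_decision("B"))  # True False
```

The model never saw a rule saying "reject group B", yet it learned one, which is why human review of both the training data and the resulting decisions matters.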
New forms of fraud
Machines are good at spotting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models use data patterns and therefore assume that future activity will follow the same patterns or at least a consistent pace of change. This leaves open the possibility that attacks can succeed simply because they have not yet been seen by the system during training.
Pairing machine learning with a human fraud assessment team ensures that new fraud is identified and flagged, and that updated data is fed back into the system. Human fraud experts can flag transactions that may have initially passed identity verification checks, but are suspected of being fraudulent, and feed that data back to the business for a closer look. In this case, the ML system encodes this knowledge and adjusts the algorithms accordingly.
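That feedback cycle can be sketched as a simple relabeling step before the next retrain. The transaction IDs and data here are hypothetical:

```python
# Toy sketch of the human-in-the-loop feedback cycle: analysts flag
# transactions the system initially passed, and the corrected labels
# are merged back into the training set before retraining.

def apply_analyst_feedback(training_labels, analyst_flags):
    """Override the original labels with analyst-confirmed fraud."""
    updated = dict(training_labels)   # tx_id -> is_fraud label
    for tx_id in analyst_flags:
        updated[tx_id] = True         # analyst confirmed fraud
    return updated

training_labels = {"tx1": False, "tx2": False, "tx3": True}
analyst_flags = {"tx2"}               # passed checks, later flagged

training_labels = apply_analyst_feedback(training_labels, analyst_flags)
print(training_labels["tx2"])  # True: the next retrain sees the fraud
```

Each retraining pass on the corrected labels is how the system "encodes" the analysts' knowledge.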
Understand and explain decisions
One of machine learning's biggest drawbacks is its lack of transparency, yet transparency is a fundamental tenet of identity verification. You must be able to explain how and why certain decisions are made, as well as share information with regulators about each step in the process and the customer journey. Lack of transparency can also create mistrust among users.
Most ML systems provide a simple pass or fail score. Without transparency in the process behind a decision, it can be difficult to justify when regulators come calling. Continuous data feedback from ML systems can help companies understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.
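One way to move beyond a bare pass/fail is to return the per-signal contributions behind the score alongside the decision. The signals, weights and threshold below are hypothetical, but the shape of the output shows what "explainable" means in practice:

```python
# Toy sketch of an explainable decision: the response carries the
# per-signal contributions behind the score, not just pass/fail.
# Signals, weights and the threshold are hypothetical.

WEIGHTS = {"document_match": -0.4, "new_device": 0.2,
           "address_mismatch": 0.5, "velocity_alert": 0.6}
THRESHOLD = 0.5

def explain_decision(signals):
    contributions = {s: WEIGHTS[s] for s in signals if s in WEIGHTS}
    score = sum(contributions.values())
    return {
        "decision": "fail" if score >= THRESHOLD else "pass",
        "score": round(score, 2),
        "contributions": contributions,  # the "why" behind the decision
    }

result = explain_decision({"new_device", "address_mismatch"})
print(result["decision"], result["contributions"])
```

A record like this can be logged per transaction, giving a company something concrete to show when regulators ask why a given customer was declined.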
There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it is clear that machines alone are not enough to verify identities at scale without adding risk. The power of machine learning is best realized together with human expertise and with data transparency to make decisions that help businesses build customer loyalty and grow.
Christina Luttrell is CEO of GBG Americas, which consists of Acuant and IDology.