When synthetic identity fraud looks just like a good customer

It is tempting to assume synthetic identity fraud is victimless: if the identities are fake, the reasoning goes, they don’t belong to real people, so no one gets hurt. That assumption is wrong.

What is synthetic identity fraud?

Criminals create fake identities by combining stolen pieces of personal information such as Social Security numbers, names, and birthdates. This type of fraud is often called Frankenstein fraud because it stitches together real and fake components to form a new, convincing identity.

Fraudsters may act like trustworthy borrowers for months or even years to build credit and qualify for larger loans. Once they gain access to substantial credit, they max out the accounts and vanish. Because no single real person owns a synthetic identity, there is often no victim to dispute the charges, which makes the fraud hard to detect and even harder to prosecute. Many banks write these cases off as routine loan defaults, masking the true extent of the fraud.

According to Sumsub, synthetic identity fraud involving fake documents has increased by over 300%.

Why detection is so difficult

Detecting synthetic identities is difficult because it requires combining data from many sources. These include government records, credit bureaus, and phone companies. Often, these organizations do not share data well due to technical, legal, or privacy challenges. As a result, the information is incomplete or hard to access.

Synthetic identities are built to closely imitate real credit activity. For example, fraudsters may use fake Social Security numbers combined with real payment histories. This makes their records look real and hard to tell apart from genuine consumer data. Credit scoring systems and fraud detection tools often fail to spot these fakes.

Over time, synthetic identities reduce the accuracy of credit databases. This causes credit scores to be less reliable. Lenders then face greater risks. They may approve loans for high-risk borrowers or deny credit to trustworthy customers, which can lead to financial losses.

Who are the real victims?

Consumers: When criminals use stolen data, victims may face damaged credit scores, collection notices for debts they didn’t create, and long, costly processes to clear their records.

Financial institutions: Deloitte expects synthetic identity fraud to generate at least $23 billion in losses by 2030. These losses reduce profits and lead financial institutions to tighten their lending criteria.

Other borrowers: To recover those losses, lenders may raise interest rates or limit access to credit. This makes borrowing more expensive and difficult for everyone.

The economy: Fraud affects the accuracy of credit data, which weakens risk assessment tools. This leads to more bad loans and slower overall economic growth.

Children’s identities in synthetic fraud

Why do fraudsters go after kids? Because children usually have no credit history, no loans, and no alerts on their credit reports. This makes it easier for criminals to use their information to create fake credit profiles.

Fraudsters especially target children’s Social Security numbers. These numbers often aren’t active until the child is older, which means they can be paired with any name and birthdate to create a fake identity.

Parents rarely check their children’s credit reports, but doing so can catch fraud early and prevent years of damage.

It is also important to teach children to protect their personal information and to be careful about what details they share on websites and in apps.

Strengthening detection through collaboration

Businesses that are frequent targets of attacks should collaborate and share data with one another. Criminals using synthetic identities often operate across multiple organizations, so a shared database could help detect suspicious patterns.
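As a rough sketch of how such sharing might work without institutions exposing raw customer data to one another: each participant could contribute salted hashes of identity attributes, and a Social Security number that appears under two different name/birthdate combinations would be flagged for review. The function names, the shared-salt scheme, and the sample records below are all illustrative assumptions, not a real consortium API.

```python
import hashlib
from collections import defaultdict

SALT = b"demo-consortium-salt"  # illustrative; a real consortium would manage keys properly

def h(value: str) -> str:
    """Salted hash so raw PII never leaves the contributing institution."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

def flag_suspicious_ssns(records):
    """records: (org, ssn, name, dob) tuples from participating lenders.

    Returns hashed SSNs that appear under two or more distinct
    name/birthdate combinations across organizations -- a classic
    synthetic-identity signal.
    """
    identities = defaultdict(set)
    for org, ssn, name, dob in records:
        identities[h(ssn)].add(h(f"{name.lower()}|{dob}"))
    return {ssn_hash for ssn_hash, ids in identities.items() if len(ids) >= 2}

# Hypothetical submissions: one SSN used under two different identities
records = [
    ("bank_a", "123-45-6789", "Jane Roe", "1990-01-01"),
    ("bank_b", "123-45-6789", "Alex Smith", "2001-07-07"),  # same SSN, new name/DOB
    ("bank_c", "987-65-4321", "John Doe", "1985-05-05"),
]
flags = flag_suspicious_ssns(records)
```

No single lender would see anything unusual in its own records here; the pattern only emerges once the hashed submissions are pooled.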

Additionally, combining document verification, biometric authentication, and knowledge-based questions can strengthen identity verification and reduce reliance on any single method.
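One way to picture that layering is a weighted blend of verification signals, so that passing any single check is never enough on its own. The signal names, weights, and thresholds below are invented for the sketch, not taken from any real verification product.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    # Each score is in [0, 1]; the names and weights are illustrative assumptions.
    document_check: float   # e.g., ID document authenticity score
    biometric_match: float  # e.g., selfie-to-ID face comparison
    kba_score: float        # fraction of knowledge-based questions answered correctly

def identity_confidence(s: VerificationSignals) -> float:
    """Weighted blend so no single method decides the outcome alone."""
    return 0.4 * s.document_check + 0.4 * s.biometric_match + 0.2 * s.kba_score

def decision(s: VerificationSignals, approve_at=0.8, review_at=0.5) -> str:
    score = identity_confidence(s)
    if score >= approve_at:
        return "approve"
    if score >= review_at:
        return "manual_review"
    return "reject"

# A convincing forged document can ace one check yet still fail the blend:
applicant = VerificationSignals(document_check=0.9, biometric_match=0.2, kba_score=0.5)
outcome = decision(applicant)  # blended score 0.54 -> escalated to manual review
```

The design point is the one the paragraph makes: a fraudster who defeats the document check still has to defeat the biometric and knowledge-based layers to get an automatic approval.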

AI is changing the game on both sides

Fraudsters use GenAI to create hyper-realistic fake documents, including passports, ID cards, and biometric records. These forgeries can often fool automated verification systems and even human reviewers, making them dangerous in digital onboarding or remote identity checks.

On the other side, financial institutions fight back with AI and machine learning. As Blair Cohen, President of AuthenticID, explains: “AI-powered fraud detection systems leverage machine learning to identify fraudulent patterns accurately. For instance, anomaly detection algorithms analyze transaction data to flag irregularities indicative of synthetic identity fraud, continuously learning from new data and evolving fraud tactics to enhance effectiveness over time.”
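The anomaly detection the quote describes can be sketched in its simplest form: flag transactions whose amount deviates sharply from an account’s established pattern. Production systems use far richer features and learned models; the z-score method and the threshold here are simplifying assumptions for illustration only.

```python
from statistics import mean, stdev

def zscore_anomalies(amounts, threshold=2.0):
    """Flag amounts that deviate strongly from the account's norm.

    A transaction is anomalous if it lies more than `threshold`
    standard deviations from the mean of the history.
    """
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Months of modest "credit-building" activity, then a sudden bust-out charge
history = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0, 44.0, 9800.0]
flagged = zscore_anomalies(history)
```

A bust-out pattern like this is exactly where simple statistics already helps: the long run of small, regular charges defines a baseline that the final maxed-out purchase violates.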

