Face morphing software can blend two people’s photos into a single image that face recognition systems match to both of them, allowing one person to pass as the other and fool identity checks at buildings, airports, borders, and other secure facilities.
Face morphing software can blend photos of different people’s faces into a single synthesized image (Source: NIST)
This kind of software is easy to obtain: a morph can be made with phone apps, desktop graphics programs, or AI tools. Quality varies, and some tools leave telltale signs, such as uneven skin tone or unnatural detail around the eyes, nose, lips, or eyebrows.
To address the problem, the National Institute of Standards and Technology (NIST) has published new guidelines on how organizations can use detection tools to catch morph attacks before they succeed.
The publication, Face Analysis Technology Evaluation (FATE) MORPH 4B: Considerations for Implementing Morph Detection in Operations (NISTIR 8584), explains morphing in simple terms and offers advice on how to respond. It is meant to help organizations set up detection systems in places where morphed photos might appear, such as passport offices or border crossings. It also covers what to do once a suspicious photo is flagged.
Since 2018, NIST has been testing software designed to spot morphs. The new guidelines mention the current state of detection tools but focus mainly on practical use cases.
A key distinction in the report is between two detection situations. The first, called single-image morph attack detection, happens when officials only have the questionable photo, such as when reviewing a passport application. The second, differential morph attack detection, happens when officials have both the questionable photo and a trusted photo, such as one taken at a border checkpoint.
Each method has strengths and weaknesses. Single-image detection can be very accurate, sometimes catching nearly all morphs, if the detector has been trained on the same type of morphing software. But accuracy drops sharply, even below 40%, when facing unfamiliar tools. Differential detectors are more reliable overall, with accuracy ranging from 72% to 90% across different morphing software, but they require a second genuine photo for comparison.
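To make the distinction concrete, the sketch below shows how the two decision flows might be wired together in an application pipeline. This is a minimal illustration, not anything prescribed in the NIST report: the scoring functions are hypothetical stand-ins for a real morph detector and face matcher, and the thresholds are placeholder values that an operator would have to tune.

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical stand-ins; a real deployment would call an actual
# morph-detection model and face-recognition matcher here.
def single_image_morph_score(photo: bytes) -> float:
    """Score in [0, 1]; higher means more likely to be a morph."""
    return 0.0  # placeholder


def differential_morph_score(photo: bytes, trusted_photo: bytes) -> float:
    """Compare the questioned photo against a trusted live capture."""
    return 0.0  # placeholder


@dataclass
class Decision:
    flagged: bool
    method: str
    score: float


# Illustrative thresholds only; real systems tune these to balance
# false positives (extra manual reviews) against missed morphs.
SINGLE_IMAGE_THRESHOLD = 0.7
DIFFERENTIAL_THRESHOLD = 0.5


def screen_photo(photo: bytes, trusted_photo: Optional[bytes] = None) -> Decision:
    """Use differential detection when a trusted photo exists (e.g., a live
    capture at a border checkpoint); otherwise fall back to single-image
    detection (e.g., a photo submitted with a passport application)."""
    if trusted_photo is not None:
        score = differential_morph_score(photo, trusted_photo)
        return Decision(score >= DIFFERENTIAL_THRESHOLD, "differential", score)
    score = single_image_morph_score(photo)
    return Decision(score >= SINGLE_IMAGE_THRESHOLD, "single-image", score)
```

The operational point the report emphasizes is that the differential path is only available when a second, trusted image exists, and that a flagged result should feed into human review rather than an automatic rejection.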
Most of the guidance focuses on how to configure detection systems and what to do after a possible morph is identified. Recommendations include a mix of automated tools, human review, and clear procedures for investigating flagged images.
“The most effective defense against morphs is to stop them from entering identity systems in the first place,” said Mei Ngan, one of the report’s authors. The guidelines suggest ways to prevent manipulated photos from being submitted during the application and document-issuance stages.
Ngan added that the team considered the pressures review officers face, including the large number of photos they process and the limited staff available to investigate.
“What we’re trying to do is guide operational staff in determining whether there is a need for investigation and what steps that might take,” she said.
Another goal is to raise awareness. “It’s important to know that morphing attacks are happening, and there are ways to mitigate them,” Ngan said. “The best way is to not allow users the opportunity to submit a manipulated photo for an ID credential in the first place.”