AI-driven Deception and its Threat to Business
Deepfakes and disinformation are fueling fraud and business risk. Powered by advancements in large language models (LLMs), artificial intelligence can create realistic images, cloned voices, deepfake videos, and other forms of content. Anyone can now produce such content with so little effort that the World Economic Forum warns that, by 2026, approximately 90% of online content will be synthetically produced. While many legitimate AI applications exist, malign actors routinely manipulate AI-generated images, voice, video, and text to run disinformation campaigns. While this poses a significant challenge for society at large, businesses in particular struggle to mitigate financial fraud, the impersonation of top executives, reputational damage, and related cybersecurity incidents.
Simply put, misinformation is the inadvertent sharing of false information, while disinformation is the deliberate sharing of false narratives with the intent to cause harm. Memes, fabricated websites, and deepfake videos are some examples. To illustrate the gravity of these threats, the WEF's 2024 Global Risks Report identified misinformation and disinformation as the top global risk, ranking it ahead of climate and geopolitical risks.
Disinformation can contribute to polarization, reinforce biases, manipulate public opinion, and erode trust in democratic processes, leaders, and institutions. Disinformation campaigns have most commonly been deployed around elections and referenda to sway voters and influence results.
Although we might assume that advances in technology are what fuel the spread of disinformation, in reality human motivations and cognitive mechanisms drive how people interact with it. Wrongdoers are aware of this and deliberately exploit cognitive biases to disseminate false information. Let's take a look at a few reasons why humans fall for disinformation:
- Cognitive disconnect leading to overconfidence: Recent research from iProov revealed that only 0.1% of people could correctly distinguish between real and fake content, despite 60% believing they were proficient at detecting deepfakes. We fall for disinformation because we are overconfident in our ability to detect manipulated content.
- Manipulating emotions to manipulate perception: Increased emotional intensity, positive or negative, heightens one’s belief in disinformation and compromises one’s capacity to distinguish between disinformation and actual news. This also demands that social media outlets take more responsibility to vigilantly moderate, screen, or filter false content.
- The illusory truth effect: Repetition makes information seem more believable, a phenomenon known as the illusory truth effect. Individuals are also more likely to share repeated information than new information.
- Confirmation bias: One of the biases inherent in most of us is confirmation bias, the propensity to favor information that agrees with our values and beliefs. We are much more likely to accept information that comports with how we think, or how we want things to be, and less likely to verify information that aligns with our existing attitudes.
Deepfakes can have disastrous consequences for businesses, eroding trust and security. Adversaries create realistic fakes of actual circumstances, individuals, and events to trick employees into enabling identity theft, fraudulent transactions, and other malicious activity. A well-crafted, targeted deepfake video may deceive or blackmail employees, contractors, or partners into conducting or enabling a system breach. In an oft-cited example, a finance staff member at a multinational company in Hong Kong was duped into sending over $25 million to scammers while on a video conference with a deepfake of his chief financial officer.
Due to the plethora of social media platforms and the volume of personal content easily accessible online, malign actors have a frightening amount of information about potential targets. AI tools can produce deepfakes that micro-target individuals, tailored to their likes, dislikes, and concerns. Fueling the fire, LLMs help craft believable phishing campaigns, which, when used in conjunction with deepfakes, can result in large-scale exploitation.
Companies have largely ignored the effect of deepfakes on business decision-making. Managers are increasingly unable to distinguish between real and deepfake-manipulated content, resulting in strategic mistakes, financial losses, and reputational harm.
The threat of disinformation is, in fact, a form of cognitive warfare. Technological defenses have a role to play in countering deepfakes and disinformation, but they must not be the sole weapon in the arsenal. Organizations must cultivate a security-centered culture that promotes cognitive resilience among employees. They can do this in a few ways:
- Conduct employee training: Implement training and security awareness initiatives that motivate users to verify the source and content of information and spot red flags before responding or sharing. Periodic simulated phishing exercises will train employees to be more skeptical, identify signs of manipulation such as unconventional speech patterns or visual inconsistencies, respond appropriately to social engineering scams, and stay informed about AI-fueled threat tactics.
- Encourage critical thinking: Companies need to encourage users to sharpen their critical thinking skills, which helps foster a stronger security culture, counters manipulative threats, and enables users to question and critique the information presented to them rather than accepting it at face value.
- Deploy an incident response plan: The IR plan should include measures for curbing the spread of misinformation and deepfakes, with awareness of the reputational risk at stake. Post-incident analysis will help organizations refine future response plans and revise training programs based on employee input and the insights gained.
Countering disinformation requires a thorough understanding of the same cognitive factors that adversaries exploit in their attacks. Organizations must combine technological defenses, user awareness campaigns, and regulatory policies to protect against such manipulative attacks.