A popular saying is: “To err is human, but to really foul things up you need a computer.”
Although the saying is older than you might think, it doesn’t predate the concept of artificial intelligence (AI).
And however long we’ve been waiting for AI technology to become commonplace, if AI has taught us one thing this year, it’s that when humans and AI cooperate, amazing things can happen. But amazing is not always positive.
There have been some incidents in the past year that have made many people even more afraid of AI than they already were.
We started off 2024 with a warning from the UK’s National Cyber Security Centre (NCSC), which said it expects AI to heighten the global ransomware threat.
A lot of AI-related stories this year dealt with social media and other public sources being scraped to train AI models.
For example, X was accused of unlawfully using the personal data of more than 60 million users to train its AI, Grok. Underlining that fear, a hoax went viral on Instagram Stories telling people they could stop Meta from harvesting their content by copying and pasting a block of text.
Meta had to admit that it scrapes the public photos, posts, and other data of adult Australian Facebook users to train its AI models, which no doubt contributed to Australia’s ban on social media for children under the age of 16.
As with many developing technologies, the race to stay ahead sometimes matters more than security. This was best demonstrated when an AI companion site called Muah.ai was breached and the details of all its users’ fantasies were stolen. The hacker described the platform as “a handful of open-source projects duct-taped together.”
We also saw an AI supply-chain breach when a chatbot provider exposed 346,000 customer files, including ID documents, resumes, and medical records.
And if the accidents didn’t scare people off, there were also outright scams targeting people who were eager to try some of the popular applications of AI. A free AI editor lured victims into installing an information stealer that came in both Windows and macOS flavors.
We saw further refinement of an ongoing type of AI-supported scam: deepfakes. Deepfakes are realistic AI-generated media created to trick people into believing the depicted events actually happened. They can be used in scams and in disinformation campaigns.
A deepfake of Elon Musk was named the internet’s biggest scammer as it tricked an 82-year-old into paying $690,000 through a series of transactions. And AI-generated deepfakes of celebrities, including Taylor Swift, led to calls for laws to make the creation of such images illegal.
Video aside, we reported on scammers using AI to fake the voices of loved ones, calling victims to claim they’d been in an accident. Reportedly, with the advancements in technology, only one or two minutes of audio, perhaps taken from social media or other online sources, is needed to generate a convincing deepfake recording.
Voice recognition doesn’t always work the other way around, though. Some AI models have trouble understanding spoken words. McDonald’s ended its AI drive-thru ordering experiment with IBM after too many incidents, including customers getting 260 Chicken McNuggets or bacon added to their ice cream.
To sign off on a positive note: a mobile network operator is using AI in the battle against phone scammers. AI Granny Daisy combines several AI models that listen to what scammers have to say and then respond in a lifelike manner, giving the scammers the impression they’re working on an “easy” target. Playing on the scammers’ biases about older people, Daisy usually acts as a chatty granny, wasting time the scammers can’t spend on real victims.
What do you think? Do the negatives outweigh the positives when it comes to AI, or is it the other way round? Let us know in the comments section.