The artificial intelligence race has accelerated since the release of ChatGPT in November 2022, with organizations around the world developing their own AI chatbots.
Amid this rapid growth, there have been calls to pause the development of ever more powerful AI systems. An open letter signed at the end of March warned of the potential risks posed by AI systems that spiral out of control.
Figures from Apple, Twitter, and DeepMind were among the signatories, calling on AI labs to pause the training of more powerful systems for at least six months before developing them further.
In a recent interview with the BBC, Apple co-founder Steve Wozniak warned that threat actors could use this technology to conduct far more sophisticated attacks on organizations.
“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are,” Wozniak told the BBC.
He said he was not concerned about AI replacing people, since it lacks emotion. His worry is that threat actors could use AI systems like ChatGPT to craft attacks with more convincing, strategic, and intelligent text.
He added, “A human really has to take the responsibility for what is generated by AI.”
As AI adoption grows and its answers become more convincing, proper training and oversight matter in every area of technology.
“We can’t stop technology, but we can prepare the people to be cautious about malicious attempts to steal personal information,” he said.
Artificial intelligence could become one of the greatest inventions mankind has ever seen, or one of its major failed experiments, depending on how it is used.
In a recent report, Japanese cybersecurity experts found that ChatGPT could be made to write malware code via a prompt that convinces the model it is in developer mode.
Hackers have also exploited ChatGPT’s popularity to spread malware via hijacked Facebook accounts. The threat actors compromised pages and profiles, and most strikingly, some of them have more than 500,000 active followers.