The Dark Side of AI: Putting Security at Risk


Everyone is talking about ChatGPT, the free artificial-intelligence chatbot created by OpenAI. The non-profit AI research organization promotes the development of friendly AI, that is, AI capable of contributing to the good of humanity.

On its website, you can converse with a “virtual person”: an AI programmed to answer almost any question, powered by a sophisticated machine learning model.

But what risks does this chatbot entail?

ChatGPT has already attracted many cybercriminals, who have begun by creating near-identical copies of the official site and app.

Once users download these fakes, even from official app stores, and install them on their phones, attackers can use them to spread malicious content.

The most serious problem, however, lies elsewhere: through carefully crafted queries, ChatGPT becomes the perfect tool for an attacker to create what the cybersecurity world calls a spear phishing attack.

These are hyper-personalized attacks, calibrated on information that users unwittingly share on their social accounts and through daily browsing on PCs and mobile devices.

In this way, cybercriminals use AI to build deceptive content tailored specifically to the person they are targeting.

To counter this growing and increasingly insidious phenomenon, Ermes – Cybersecurity, an Italian cybersecurity company, has developed its own AI-based system.

“Companies and employees, as is happening today with ChatGPT, will increasingly rely on third-party services and enabling technologies based on AI,” said Lorenzo Asuni, Chief Marketing Officer at Ermes – Cybersecurity.

“For this reason, we at Ermes are monitoring this space and developing a tool that still lets you use these services, but does so safely: it filters and blocks the sharing of sensitive information, such as email addresses, passwords, or financial data, that we might include in our requests to these services by mistake,” he added.
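Ermes has not published how its filter works; the sketch below is purely illustrative, assuming a simple regex-based redaction layer that scrubs prompts before they leave the corporate network. All pattern names and rules here are hypothetical.

```python
import re

# Hypothetical patterns for sensitive data (illustrative only, not Ermes's actual rules).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PASSWORD": re.compile(r"(?i)\bpassword\s*[:=]\s*[^\s,]+"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything that looks like sensitive data with a placeholder
    before the prompt is sent to a third-party AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Reply to mario.rossi@example.com, password: Hunter2, IBAN IT60X0542811101000000123456"
print(redact_prompt(raw))
# -> Reply to [EMAIL REDACTED], [PASSWORD REDACTED], IBAN [IBAN REDACTED]
```

A production tool would go well beyond regular expressions (entity recognition, policy rules, user confirmation prompts), but the principle is the same: intercept and sanitize the request before the data reaches the external service.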

OpenAI ChatGPT and scams: the three main risk factors

  1. The number one scam is the rise of phishing sites that exploit the hype around OpenAI ChatGPT; hundreds have appeared in recent weeks alone. Recognizing them is not easy: they use lookalike domains, mimic the official web pages or apps almost identically, and often advertise non-existent integrations, creating duplicates of the service that steal the credentials of everyone who registers (a minimal lookalike-domain check is sketched just after this list).
  2. Spear phishing attacks become easier and more scalable, thanks to the fast, high-quality production of highly targeted email campaigns (BEC), SMS messages (smishing), or ads (malvertising), aimed at financial fraud or the theft of personal data and credentials.
  3. The sharing of sensitive company information, driven by the continuous demand for content, answers, and analysis.
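
On the first risk factor, one building block defenders use against lookalike domains is a simple string-similarity check. The sketch below is purely illustrative, not any vendor's actual detection logic; the allow-list and threshold are assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; a real product would use threat-intelligence feeds.
LEGITIMATE_DOMAINS = {"openai.com", "chat.openai.com"}

def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that is suspiciously similar to, but not exactly,
    a known legitimate domain (e.g. 'chat-openai.com')."""
    domain = domain.lower().strip(".")
    if domain in LEGITIMATE_DOMAINS:
        return False  # exact match: legitimate
    return any(
        SequenceMatcher(None, domain, legit).ratio() >= threshold
        for legit in LEGITIMATE_DOMAINS
    )

print(looks_like_typosquat("chat-openai.com"))  # True: near-copy of the real domain
print(looks_like_typosquat("example.com"))      # False: unrelated
```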

As for the third risk factor, how does the leak happen? For example, by asking the AI to “reply to this email” while forgetting to strip out the recipient’s or sender’s address, or by feeding these new technologies financial data or the names of customers and partners.

A practical example: Business Email Compromise (BEC), the risk to business email

ChatGPT responds excellently to almost any content request, but this becomes particularly risky when it is used to mount a business email attack, the so-called BEC.

In a BEC attack, the attacker uses a template to generate a deceptive email that prompts the recipient to hand over sensitive information.

With the help of ChatGPT, attackers can customize every communication, potentially producing unique content for each generated email, which makes these attacks far harder to detect and recognize as such.

Likewise, writing emails or building a copy of a phishing site becomes easier: no typos or telltale formatting, which today are often the key signals that distinguish these attacks from legitimate messages.

Most worrying of all, the attacker can keep refining the prompt: “make the email sound urgent,” “write an email with a high probability that recipients click the link,” and so on.
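
This is also why defenses built on surface signals are losing ground. As a toy illustration (hypothetical, not any real product’s logic), consider a crude filter that scores mail on urgency phrases and common misspellings: a sloppy human-written scam trips it, while a fluent, AI-polished message sails through.

```python
# Toy heuristic of the kind AI-written phishing increasingly defeats:
# score an email on crude surface signals (urgency phrases, misspellings).
URGENCY_PHRASES = ["act now", "urgent", "verify your account", "immediately"]
COMMON_MISSPELLINGS = ["recieve", "acount", "verfy", "pasword"]

def crude_phishing_score(body: str) -> int:
    text = body.lower()
    score = sum(phrase in text for phrase in URGENCY_PHRASES)
    score += 2 * sum(typo in text for typo in COMMON_MISSPELLINGS)
    return score

# A sloppy, human-written scam trips the filter...
print(crude_phishing_score("URGENT: verfy your acount immediately"))  # -> 6

# ...while a fluent, AI-polished version scores zero.
print(crude_phishing_score("Hi Anna, following up on yesterday's invoice, could you confirm the new IBAN today?"))  # -> 0
```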




