by David Lukić, Cyber Security Consultant and Director of Security at IDStrong
AI technology permeates our lives, from the smartwatch on your wrist to automated watering timers on farms. It has become an essential modern tool, not only because we can order takeout with a click, but because it makes knowledge more accessible to the public and more global than ever.
The latest and most interactive development in AI tech is the release of ChatGPT.
ChatGPT is an AI model capable of completing tasks in a conversational tone that is eerily human. However, using it comes with cybersecurity risks; before talking to your new best friend, learn what those risks are and how they impact cyberspace.
The Growing Popularity of ChatGPT
“ChatGPT” is shorthand for “Chat Generative Pre-trained Transformer,” and it works in the same manner as online chatting. The difference between ChatGPT and chatting with a stranger online is that ChatGPT can fulfill tasks and generate information through friendly conversation.
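For a sense of how that conversation-as-interface works in practice, here is a minimal sketch using OpenAI’s Python package (the v0.x interface current at the time of writing; the model name, prompt, and environment variable are assumptions for illustration, and the ChatGPT website itself requires no code at all):

```python
import os

import openai  # pip install openai (the v0.x interface is shown here)

# Assumes an API key is set in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model to fulfill a small task through plain conversation,
# phrased the same way a user would type it into the chat window.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT
    messages=[
        {"role": "user",
         "content": "Write a friendly two-sentence reminder to buy eggs."},
    ],
)

print(response.choices[0].message.content)
```

The request and reply are just structured chat messages, which is exactly why the tool feels like talking to a person rather than querying a database.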
ChatGPT is notable in many respects, but its ability to converse cleanly, as if it were human, separates it from other chatbot AIs.
Chatbot technology is one of the leading applications of AI today because it can fulfill needs in so many ways. Yell out for Alexa, and she’ll answer back: set your reminder, and add eight cartons of eggs to your cart.
Chatbot tech provides consumers with better experiences through small and medium task fulfillment, like finding a movie to watch. Of course, the tech is also used across industries alongside other AI; a particularly strong AI-powered image generation engine, DALL-E 2, also broke the internet recently.
ChatGPT and DALL-E have another aspect in common: OpenAI created them both. OpenAI is a front-runner in advancing technology, having received funding from Microsoft, Elon Musk, and other big industry names.
The funding from Microsoft has another element: ChatGPT will be integrated with the search engine Bing. Eventually, users will switch seamlessly between Bing’s search bar and ChatGPT’s interface, allowing for faster media consumption since questions can be asked and answered instantly.
Before this, Bing was far from the top search engine in the nation. After all, Google is so immersed in our lives that it has become a verb. However, the communicative abilities of ChatGPT may be enough to pull some internet users into Bing’s territory. If enough users hop the fence, is it the end for Google?
Does the Rise of ChatGPT Mean the End for Google?
For Bing to surpass Google’s market share, roughly half of Google’s users would need to switch sides: Google currently handles about nine in every ten searches worldwide, while Bing holds only a few percent. To actually end Google, nearly all of the remaining users would have to follow, and the tech giant would likely sell out completely before that happened.
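A quick back-of-the-envelope check makes the scale clear. The starting shares below are rough, assumed estimates used only for illustration:

```python
# Back-of-the-envelope: what fraction of Google's users must switch to
# Bing before Bing's share overtakes Google's? The starting shares are
# rough assumed estimates, not figures from any official source.
google = 0.90  # assumed share of worldwide searches handled by Google
bing = 0.03    # assumed share handled by Bing

# If a fraction f of Google's users defect to Bing:
#   Google's new share: google * (1 - f)
#   Bing's new share:   bing + google * f
# Setting the two equal and solving for f gives the break-even point.
f = (google - bing) / (2 * google)

print(f"About {f:.0%} of Google's users would have to switch.")  # ~48%
```

Even under generous assumptions, “surpassing Google” means moving nearly half of all search traffic, which is why a single new feature is unlikely to do it.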
So, no, the rise of ChatGPT, even in Bing’s hands, doesn’t mean the end of Google; however, the technology has caught Google’s attention, much like a large, primary-colored Eye of Sauron.
Google also uses chatbot technology, though its version is built on a language model of its own: LaMDA, or Language Model for Dialogue Applications. A conversational interface, however, bypasses the advertisements that make up roughly 80% of Google’s revenue, which explains why LaMDA is not a widely used tool. Until Google can capitalize on the technology without sacrificing its revenues, LaMDA won’t be a front-facing feature of Google.
Google is one of the world’s biggest proponents, funders, and most aggressive researchers of advancing technology. Its immense size, however, is also the source of its problems. Industry giants become giants because they capitalize on something unique about themselves; Google, having spent years as the world’s search engine, will face problems implementing and competing with advancing technology.
The problems Google faces will stem from an inability to adapt quickly. Smaller studios like OpenAI have more freedom to push AI boundaries; Google is more likely to lean into impact research or back off entirely.
Cybersecurity Risks of ChatGPT
ChatGPT isn’t perfect. It offers everyone new ways to consume and generate information, but it has some problems and potential security risks.
There have been growing pains involving reasoning, logic, factual accuracy, arithmetic, syntax, diction, humor, and some cases of discriminatory output. Notably, although the AI collects and compiles data into consumable words, it cannot understand what it is communicating.
All these problems (and others) are a natural byproduct of how ChatGPT is trained. Essentially, the AI “learns” from human-written text and from human feedback on its answers before eventually formulating its own. The boundaries of this training become clear when the AI starts to break.
However, when the technology works well, the potential security problems for other companies increase dramatically. The AI’s complex tech can be exploited, manipulated, and distorted to benefit bad actors.
ChatGPT’s security problems have two sides: one online and one in the real world. The real-world problems come from more than the eventual rise of our AI overlords; they are problems produced by rapidly advancing industries.
For example, this technology poses a real threat to certain jobs and areas of the workforce. A few simple lines of plain text are enough to generate creative works that rival those of real-life artists, writers, web designers, and software developers. The internet experienced this issue when DALL-E 2 was released; many online artists and content creators were horrified by its implications.
The other real-world security risk ChatGPT poses is an increase in disinformation and bias in its request fulfillment. ChatGPT can create new types of disinformation by command or by mistake: users can ask it to impersonate others or to create fake news and conspiracy theories.
On the other hand, if the AI doesn’t know the answer to a question, it may attempt to “fill in the gaps.” Ironically, those filled gaps become mistakes, sometimes small and other times large, and the result can be disinformation that is generated and spread at scale.
Online, ChatGPT puts more at risk than its maker’s cybersecurity. Its technology is enough to disrupt search engines, which implement complex algorithms to return the most relevant information for a person’s search.
Some companies use online tools to help them appear higher in search results; this practice is called SEO, or search engine optimization. The problem with ChatGPT, in this regard, is that it can generate enormous amounts of content, swamping and manipulating search results.
The cybersecurity threats of ChatGPT are numerous, although different from what laypeople may expect. One type of threat is an increase in phishing emails and messages. Usually, those who can identify scam emails and messages can do so because there are obvious signs.
Emails littered with stray numbers and bad grammar are dead giveaways of scam activity; with ChatGPT, however, those messages could suddenly start to look and feel like a real human wrote them. That will immediately put hundreds of thousands at risk, and encourage more identity monitoring than ever before.
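To see why that matters, consider a toy version of the grammar-and-garble heuristic people (and naive filters) rely on. The rules and sample messages below are hypothetical illustrations, not any real product’s logic; the point is that fluent, LLM-written text slips straight past them:

```python
import re

# Toy phishing filter: flag messages that show the classic "tells"
# (digit-riddled words, clumsy stock phrasing with typos). These rules
# and samples are illustrative only; real filters are far more complex.
TELLS = [
    r"\b\w*\d+\w*\b.*\b\w*\d+\w*\b",                 # several digit-riddled words
    r"click here immediatly|kindly do the needful",  # clumsy stock phrasing
]

def looks_suspicious(message: str) -> bool:
    text = message.lower()
    return any(re.search(pattern, text) for pattern in TELLS)

clumsy = "D3ar custom3r, click here immediatly to verify you account!!"
fluent = ("Hi Sam, following up on our call: the updated invoice is "
          "attached. Could you confirm payment by Friday?")

print(looks_suspicious(clumsy))  # True: trips the old giveaways
print(looks_suspicious(fluent))  # False: fluent text sails right through
```

Once scam messages read like the second example, surface-level tells stop working, and defenders must fall back on verifying senders and links instead.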
ChatGPT is unsurpassed in its linguistic abilities, but another aspect should concern cyber authorities. Until now, if a malicious actor wanted to break into a network system, they would need to learn how to do so themselves.
However, with ChatGPT, there is room for cybercriminals to create aggressive and complex malware using simple prompts: malware strong enough to pierce firewalls or bulky enough to break a company’s security protocols. Security experts and law enforcement will have difficulty staying ahead of these actors.
How ChatGPT Collects Personal Information
Suppose you want to experience ChatGPT for yourself and make it write an essay before that midnight deadline. You’d first head to OpenAI’s website and create an account tied to your email and phone number. That email and phone number are the first entries in a never-ending record of your interactions with the AI.
Records of everything said are kept, and on its own, this is fine; people don’t usually bare their souls to a robot. But ChatGPT is different. Its linguistic abilities can make people forget they are speaking with an AI, which makes them more likely to share personal and intimate details than they otherwise would.
ChatGPT’s information database is another way it may collect personal information. Its current systems hold over 300 billion words, hacked out of the internet by a cyber machete. Because ChatGPT cannot understand the information it produces, some answers may naturally lead to social backlash.
The AI doesn’t know how to prevent rumors or harmful information that can damage a career or a life; it only knows what’s in the system. Until there are strong, stable safeguards against this sort of information distribution, some people may have their lives ruined inadvertently.
ChatGPT Has the Potential to Revolutionize Technology
It also has the potential to disrupt the internet and, with it, the hornet’s nest that is the search engine industry. As time goes on, ChatGPT’s AI will likely progress, and the world will need to adapt alongside it. ChatGPT is already changing how we approach and understand coding, marketing, and learning; only time will tell what else will change.