New Tool to Defend Against ChatGPT Data Leaks


ExtraHop released a new tool called “Reveal(x)” that helps organizations understand their potential risk exposure from employee use of OpenAI ChatGPT by providing visibility into the devices and users on their networks connecting to OpenAI domains.

ChatGPT has become widely familiar because of its potential applications in nearly every organization.

It took ChatGPT only two months to reach 100 million users, a milestone that took TikTok roughly nine months.

A Gartner survey conducted in 2023 found that roughly 9 out of 10 respondents planned to implement ChatGPT in their organizations by 2025.

Although its capabilities can help organizations accelerate their work, using AI-as-a-Service (AIaaS) tools inside an organization still carries intellectual property risks.

Recently, several data leaks have been associated with ChatGPT. Employees who use it for code review, discovery, or research often share proprietary information with the service, putting confidential data at risk.

In addition, data submitted to ChatGPT may be retained and used to train the underlying models, meaning that information could surface in responses to other users’ requests.

ChatGPT is an AIaaS offering with broad applications in productivity, software development, and research.

Reveal(x) – Data Protection from Rogue AI Use and Accidental Misuse

To address this risk, ExtraHop has released Reveal(x), which gives organizations visibility into the devices and users on their networks that are connecting to OpenAI domains.

This helps organizations adopt AI language models and generative AI tools while maintaining greater control over their data.

This information is crucial because it lets organizations identify how much data is being sent to OpenAI domains, which helps them assess the risk associated with using AI services.

Security personnel can then confirm that usage stays within acceptable risk limits and reduce potential intellectual property loss.

Technical Analysis

“Reveal(x) can provide this deep visibility and real-time detection because we use network packets as the primary data source for monitoring and analysis,” ExtraHop said.


It parses the content and payloads across OSI layers 2-7 (the data link layer through the application layer) to provide complete data visibility.
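As a rough illustration of this kind of network-level visibility (a minimal sketch, not ExtraHop’s actual implementation), the example below passively watches DNS traffic and flags clients that look up OpenAI-related domains. The scapy library, the “eth0” interface name, and the WATCHED_DOMAINS list are assumptions introduced for the example.

```python
# Illustrative sketch only: flag clients performing DNS lookups for
# OpenAI-related domains. This is NOT ExtraHop's implementation; the
# domain list, interface, and output format are assumptions.
from scapy.all import sniff, DNSQR, IP

# Domains assumed to indicate ChatGPT/OpenAI traffic (assumption).
WATCHED_DOMAINS = ("openai.com", "chatgpt.com", "oaistatic.com")

def inspect_dns(pkt):
    # DNS packets carry the queried name in the DNSQR question record.
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP):
        qname = pkt[DNSQR].qname.decode(errors="ignore").rstrip(".")
        if any(qname == d or qname.endswith("." + d) for d in WATCHED_DOMAINS):
            # In a real deployment this would feed a SIEM or dashboard.
            print(f"[openai-lookup] client={pkt[IP].src} query={qname}")

if __name__ == "__main__":
    # Requires packet-capture privileges; "eth0" is a placeholder interface.
    sniff(iface="eth0", filter="udp port 53", prn=inspect_dns, store=False)
```

A production control would go further, for example correlating flows per client and tallying bytes sent, to estimate the volume of data leaving for OpenAI domains as described above.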

Although several rules, regulations, and policies already govern how AI services may store and use data, it is still essential for organizations to understand how they themselves use these services.

ExtraHop stated: “ExtraHop believes the productivity benefits of these tools outweigh the data exposure risks, provided organizations understand how these services will use their data (and how long they’ll retain it), and provided organizations not only implement policies governing the use of these services but also have a control like Reveal(x) in place that allows them to assess policy compliance and spot risks in real time.”

It remains to be seen how far AI capabilities will advance and how significant the data exposure risks they pose will become.
