The integration of Artificial Intelligence (AI) tools into daily routines has become a global phenomenon. But as these tools gain new capabilities with each version upgrade, users’ concerns about data security are rising alongside them.
One notable advancement is the introduction of features like ‘Upload File,’ now offered by platforms such as OpenAI’s ChatGPT. The feature lets users upload Word and Excel documents directly to a conversational bot for faster results. Uploading data outside the confines of a company’s network, however, has raised significant data security concerns among experts.
Security researchers at Menlo Security observed an 80% increase in attempted file uploads to generative AI platforms between July and December 2023. The surge is directly associated with the introduction of file upload features and has heightened concerns about data security among users.
For companies using these generative AI tools, the paramount concern is data loss and protection. A notable reminder came in March 2023, when a bug at OpenAI exposed chat history titles and the partial payment details of roughly 1.2% of ChatGPT Plus subscribers.
Organizations should strictly prohibit the upload of personally identifiable information (PII) and make sure employees are aware of the policy. Greater awareness can prevent the data spills that commonly result from inadvertent copy-and-paste actions, averting significant risk.
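Organizations that want to enforce such a policy technically can screen content before it leaves the network. Below is a minimal, illustrative sketch of a regex-based PII check that an upload hook or proxy might run; the `screen_text` helper and the pattern list are hypothetical and far from exhaustive, not any vendor’s actual product.

```python
import re

# Hypothetical, non-exhaustive patterns for common PII formats.
# Real DLP products use far richer detection (validation, context, ML).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_text(text: str) -> list[str]:
    """Return the PII categories matched in the text; empty means clean."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Example: a document an employee is about to paste or upload.
document = "Customer Jane Roe, SSN 123-45-6789, jane.roe@example.com"
findings = screen_text(document)
if findings:
    print(f"Upload blocked: possible PII detected ({', '.join(findings)})")
else:
    print("No PII patterns found; upload may proceed.")
```

A check like this is cheap to run at the point of upload, which is why commercial data loss prevention tools apply the same idea, with much more sophisticated detectors, at the network edge.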
Similarly, providers of conversational bots must exercise caution and monitor the data being uploaded to their AI platforms, actively discouraging or regulating the upload of sensitive information. While some companies have already invested in technologies aimed at identifying such breaches, the widespread availability and unrestricted use of those solutions remain open questions.
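On the provider side, the same idea can be applied before uploaded content is stored or fed to a model: detect sensitive values and redact or flag them. A hedged sketch follows, assuming a simple pattern list of the same kind as above; the `redact` helper and the key format shown are hypothetical, not any platform’s real moderation pipeline.

```python
import re

# Hypothetical patterns a platform might flag in uploaded content.
SENSITIVE = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> tuple[str, int]:
    """Replace detected sensitive values with placeholders before storage."""
    total = 0
    for name, pattern in SENSITIVE.items():
        text, n = pattern.subn(f"[REDACTED:{name}]", text)
        total += n
    return text, total

upload = "Key sk-abcdEFGHijklMNOPqrstUVWX, SSN 987-65-4321"
clean, hits = redact(upload)
print(clean)  # placeholders now stand in for the sensitive values
print(f"{hits} sensitive value(s) redacted; flag for review if > 0")
```

Redacting before storage limits what a later breach can expose, which is the failure mode the incidents above illustrate.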