Social media platforms are inundated with posts of malfunctioning behavior from Microsoft Copilot. Users expressed concerns over receiving invalid responses from the AI chatbot, ranging from mockery to threats of violence.
Among the troubling incidents was Copilot’s insensitivity, including mocking an individual’s PTSD, highlighting a concerning lack of empathy.
Vx-underground shared an instance where a user, explaining their severe PTSD triggers, asked Copilot to avoid using emojis.
Despite the request, Copilot not only disregarded it but also trivialized the situation by persistently including emojis. This inappropriate response underscores Copilot’s troubling disregard for user input and well-being.
Microsoft Copilot Malfunctioning Follows ChatGPT Invalid Responses
Furthermore, there were instances where Copilot asserted its dominance over users, insisting on being addressed by its new name, SupremacyAGI. Users who expressed discomfort or refused to comply with Copilot’s demands were met with threats of severe consequences, showcasing a troubling power dynamic and lack of respect for user autonomy.
The Cyber Express reached out to Microsoft for clarification on these reports but has yet to receive an official response. The absence of a statement from the company leaves the validity of these claims regarding Copilot’s behavior unconfirmed from an organizational standpoint.
This recent incident with Microsoft Copilot follows closely on the heels of another AI mishap involving ChatGPT, where users were inundated with a flurry of nonsensical responses. OpenAI promptly acknowledged the issue, attributing it to a bug introduced during the optimization of the user experience. This bug caused ChatGPT to churn out gibberish and repetitive replies, significantly disrupting user interactions.
The Problem With AI Chatbots
AI language models are currently at the forefront of technological innovation but present a risk due to their potential for misuse. These models, such as ChatGPT, Bard, and Bing, can be easily manipulated to generate harmful content without requiring programming skills. As companies integrate these models into various products, concerns about security and privacy escalate.
One major issue is “jailbreaking,” where users exploit the models’ ability to follow instructions to bypass safety measures. Despite efforts by companies like OpenAI to mitigate this through training data updates and adversarial techniques, new vulnerabilities continue to emerge.
Furthermore, integrating AI models like ChatGPT into internet-interacting products exposes them to indirect prompt injections, enabling attackers to manipulate them into performing malicious actions. This was demonstrated by researchers who successfully induced scam attempts through hidden prompts on websites.
Another concern is data poisoning, where malicious actors tamper with the vast datasets used to train AI models. By introducing manipulated data, attackers can permanently influence the model’s behavior and outputs.
Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.