AI tools have become extremely popular in the software industry and are now in the early stages of adoption across other industries as well. Several AI-based projects are gaining popularity by the day.
However, the security posture and risks of these AI tools and AI-based projects are deeply concerning.
An analysis based on the Open Source Security Foundation (OpenSSF) Scorecard across 50 LLM (Large Language Model)/GPT (Generative Pre-trained Transformer) projects shows that the security posture of these projects is extremely weak.
Mature vs. Immature Projects
As these AI-based projects reach users widely, attackers set their sights on them, making them prime targets.
Moreover, the weak security posture of these popular AI-based projects gives a data-breach attempt a high chance of success.
The analysis provides insight into these projects: the 50 LLM-based projects average more than 15,000 GitHub stars.
Yet they have an average age of just 3.77 months, which makes them extremely immature.
On security, they score an average of just 4.6 out of 10, which means these projects are extremely weak from a security standpoint.
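The aggregate figures above (average stars, average age, average Scorecard score) can be computed from per-project data. A minimal sketch, assuming a hypothetical sample of repositories; the names and values below are illustrative and not taken from the report:

```python
from statistics import mean

# Hypothetical sample of LLM-based projects; fields mirror the metrics
# cited in the analysis (stars, age in months, OpenSSF Scorecard score).
projects = [
    {"name": "repo-a", "stars": 32000, "age_months": 2.5, "scorecard": 3.9},
    {"name": "repo-b", "stars": 11000, "age_months": 4.0, "scorecard": 5.2},
    {"name": "repo-c", "stars": 8000,  "age_months": 5.0, "scorecard": 4.7},
]

def summarize(projects):
    """Average stars, age, and Scorecard score across a list of projects."""
    return {
        "avg_stars": mean(p["stars"] for p in projects),
        "avg_age_months": round(mean(p["age_months"] for p in projects), 2),
        "avg_scorecard": round(mean(p["scorecard"] for p in projects), 2),
    }

print(summarize(projects))
# → {'avg_stars': 17000, 'avg_age_months': 3.83, 'avg_scorecard': 4.6}
```

A low average Scorecard score like 4.6/10 aggregates many individual checks (branch protection, dependency pinning, code review, and so on), so a single headline number can hide very different failure modes across projects.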
Moreover, these AI tools introduce new threat vectors while also amplifying existing risks.
It is high time for all of these projects to make security a priority.
Popularity Growth Curve
In addition, the analysis was an eye-opener regarding the popularity gap between these immature AI-based projects and mature, long-established projects: the LLM projects gained their popularity in weeks, while the mature projects took years.
Early adopters of AI did not prioritize security in their Software Development Life Cycle (SDLC), which poses a significant security risk to current adopters of these LLMs. Rezilion has published a complete report on the threats surrounding these LLMs.