Finding the right balance between ‘vibe coders’ and security

In today’s digital workplaces, more employees are building their own applications by generating code with low-code/no-code (LCNC), artificial intelligence (AI), and large language model (LLM) tools instead of manually writing lines of code.

These employees are called vibe coders. They use natural language prompts to direct LCNC and AI tools to generate code based on their unique requirements.

The first part of our two-part blog series, “The Hidden Dangers of Low/No-Code and Vibe Coding Platforms”, focuses on the security challenges of LCNC and vibe coding. We share steps to ensure LCNC and vibe-coded apps are safe for business use.

Vibe coders are often citizen developers with no formal programming skills who come from many different fields, such as healthcare, education, and finance. While they are experts in their own jobs, they are not always trained in security best practices.

LCNC development and vibe coding enable employees to use technology to tackle everyday problems themselves, without relying on their IT department or software development teams.

The role of vibe coders

Vibe coders bring fresh ideas and speed to businesses. However, because many of them don’t have formal IT or cybersecurity training, their work can put businesses at risk if not managed properly.

Vibe coders build apps to fix workplace problems without needing IT support. They use LCNC and LLM platforms to automate tasks and keep track of information. Because they work with these workflows every day, they are also well placed to improve them.

Their work helps companies move faster, cut costs, and innovate without waiting in long IT queues. They help close gaps that IT teams often don’t have the capacity to address.

Key security challenges

There are security challenges that need to be considered so that vibe coders can create apps that are safe to use.

Lack of validation and auditing

Apps built by vibe coders often miss key steps. They may overlook field validation, error handling, and logging. Without these, the apps can become unreliable or unsafe. For instance, a vibe coder might create an inventory app; without proper controls, it could produce duplicate inventory records, making the app difficult to manage and consuming unnecessary cloud storage.
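To make the example concrete, here is a minimal Python sketch of the controls such an inventory app would typically need: basic field validation, a duplicate check, and logging. The record fields and the SKU-based duplicate rule are assumptions made for this illustration, not features of any particular platform.

```python
# Minimal sketch (hypothetical inventory app): validate fields, reject
# duplicates, and log every attempt so the app stays reliable and auditable.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inventory")

existing_skus = set()  # stand-in for records already stored by the app

def add_inventory_record(sku: str, quantity: int) -> bool:
    """Validate input, block duplicate SKUs, and log the outcome."""
    if not sku or not sku.strip():
        logger.warning("Rejected record: missing SKU")
        return False
    if quantity < 0:
        logger.warning("Rejected record %s: negative quantity", sku)
        return False
    if sku in existing_skus:
        logger.warning("Rejected record %s: duplicate SKU", sku)
        return False
    existing_skus.add(sku)
    logger.info("Added record %s (quantity=%d)", sku, quantity)
    return True
```

None of this is complicated, but it is exactly the kind of scaffolding that LCNC and AI tools will not add unless the builder asks for it.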

Risks of improper app sharing

Sometimes, vibe coders share their apps with other teams or fellow vibe coders. But they often forget to set the right user permissions. If these apps aren’t secured with role-based access, sensitive business or personal data can be exposed.

Orphaned applications

Sometimes, when a vibe coder leaves the company or moves to a new role, the apps they created are forgotten. These “orphaned” apps can cause problems. They aren’t maintained, but they still run in the system, making them easy targets for attackers or accidental data leaks.

Unchecked AI-generated code

AI-powered code assistants are trained on large data sets, but they don’t always know the difference between a secure and an insecure pattern. This can lead to vulnerabilities, including problems like SQL injections, insecure authentication mechanisms, or leaking sensitive data.
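To illustrate the SQL injection risk, the snippet below contrasts the string-concatenated query an assistant might plausibly suggest with a parameterized query that treats user input as data rather than SQL. The table and values are invented for the example.

```python
# Illustrative only: an insecure pattern an AI assistant might generate,
# next to the parameterized query that avoids SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Insecure: user input is concatenated straight into the SQL string,
# so the injected OR clause returns rows the caller should never see.
insecure_query = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(insecure_query).fetchall())  # [('admin',)]

# Secure: the placeholder keeps the input as a literal value.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe_rows)  # [] -- no user is literally named that
```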

Using AI tools also carries risks of prompt injection attacks, bias, and hallucinations.

AI coding tools can suggest packages or code snippets that don’t exist, a problem known as slopsquatting. Attackers may exploit this by registering those hallucinated package names with malicious intent.
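One practical safeguard is to verify a suggested dependency before installing it. The sketch below, which assumes Python packages and PyPI’s public JSON endpoint, only checks that the name exists on the registry; because attackers may have already registered a hallucinated name, teams would typically also vet the maintainer, release history, or an internal allow-list before trusting it.

```python
# Minimal sketch: check a suggested package name against PyPI before
# running "pip install" on something an AI assistant recommended.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project page for this package name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means the suggested package does not exist

suggested = "requests"  # replace with whatever the AI tool suggested
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI -- possibly a hallucinated package")
```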

Intellectual property issues

AI-generated code, or code suggested by LCNC tools, raises the question of accountability: who is responsible for the code that an AI tool generates? Is it the organization using it, the vendor, or the team deploying it?

Balancing innovation and security

To address these challenges, businesses must put the right protections in place; speed and creativity should never come at the cost of security. These protections include:

Training and security guardrails

Companies should invest in advanced security and simulation training for vibe coders, focused on the vulnerabilities that insecure code patterns can introduce. Firms should require employees to complete this training before they are allowed to build apps with LCNC or AI-generated code. Clear security guidelines help vibe coders create better, safer apps quickly.

In his book “Cybersecurity, Psychology and People Hacking,” Tarnveer Singh emphasizes the importance of a human-centred approach to cybersecurity: designing systems and policies with the user in mind, taking into account human behaviour and psychology.

Additionally, Tarnveer notes in his book that a human-centred approach involves creating a culture of cybersecurity within the organisation, which includes promoting cybersecurity best practices and encouraging employees to report suspicious activity.

Implementing role-based access controls

Applying the principle of least privilege is key. This means giving people (and the apps they build) only the access they need—and nothing more. Apps should have built-in controls so users can’t see or change data they shouldn’t be touching.
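As a rough illustration, a role-based check can be as simple as mapping each role to the smallest set of permissions it needs and testing every action against that map; the roles and permissions below are hypothetical.

```python
# Minimal sketch of least privilege: each role gets only the permissions
# it needs, and every action is checked before it runs.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "update"},
    "admin": {"read", "update", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "read"))    # True
print(is_allowed("viewer", "delete"))  # False -- not in the viewer's grants
```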

Continuous monitoring

Automated monitoring tools can help track what vibe-coded apps are doing in real-time. If an app suddenly tries to access sensitive systems or transfer large amounts of data, security teams can be alerted and respond quickly. This monitoring helps catch problems early without requiring excessive manual work.
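A minimal sketch of such a rule, with system names and thresholds assumed purely for illustration, might look like this:

```python
# Minimal sketch of an automated monitoring rule: flag a vibe-coded app
# that touches sensitive systems or moves an unusually large amount of data.
SENSITIVE_SYSTEMS = {"hr-database", "payroll-api"}  # assumed names
MAX_TRANSFER_MB = 100                               # assumed threshold

def check_app_event(app: str, target: str, transferred_mb: float) -> list[str]:
    """Return alerts for any event that breaks the monitoring rules."""
    alerts = []
    if target in SENSITIVE_SYSTEMS:
        alerts.append(f"{app} accessed sensitive system '{target}'")
    if transferred_mb > MAX_TRANSFER_MB:
        alerts.append(f"{app} transferred {transferred_mb} MB (limit {MAX_TRANSFER_MB} MB)")
    return alerts

# Example event that would trigger both rules and page the security team.
for alert in check_app_event("inventory-tracker", "hr-database", 512):
    print("ALERT:", alert)
```

In a real deployment these signals would feed an existing SIEM or alerting pipeline rather than print statements, but the principle is the same: define what normal looks like and flag deviations automatically.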

Faster, more innovative and secure

Vibe coders are helping organizations move faster and be more innovative. Their work meets real needs and improves everyday processes. But without the right security measures, they can also introduce new risks—often without realizing it.

A smart approach balances freedom and control. It includes training, security measures, ongoing AI-powered monitoring and observability, and careful access management.

Good governance, smart oversight, and secure development practices help organizations enjoy LCNC and Vibe Code innovation. They can do this without losing control over compliance or risk.

 

Aparna Achanta is an award-winning Security Architect and Leader at IBM Consulting with extensive experience driving mission-critical cybersecurity initiatives, particularly in federal agencies. Aparna specializes in securing emerging technologies in federal agencies, including low-code/no-code applications and generative AI applications.

A member of the Forbes Technology Council and a passionate advocate for women in tech, Aparna is a founding member and speaker at the WomenTech Network and an executive board member at the Women in Cybersecurity (WiCyS) Austin Chapter. She was named to TopCyber News Magazine’s 40 Under 40 in Cybersecurity list for her contributions to the cybersecurity community.

Aparna is on the advisory board of George Mason University’s Center for Excellence in Government Cybersecurity Risk Management and Resilience.

 

Tarnveer is an award-winning CISO with two decades of experience across security and architecture domains. He is currently CISO at insurance firm The Exeter and Director of Security and Compliance at Cyber Wisdom Ltd. He is a thought leader known for blending strategic, ethical, and psychological perspectives on cybersecurity.

He has authored several seminal books, including:

  • Artificial Intelligence and Ethics: A Field Guide for Stakeholders
  • The Psychology of Cybersecurity: Hacking and the Human Mind
  • Digital Resilience, Cybersecurity and Supply Chains
  • Finance Transformation: Leadership on Digital Transformation and Disruptive Innovation
  • Cybersecurity, Psychology and People Hacking

Tarnveer’s books are recognized for raising awareness on topics ranging from AI risk and people hacking to digital resilience, and are widely considered essential reading for cybersecurity professionals. He is a Fellow of both the British Computer Society and the Chartered Institute of Information Security.

 

