The must-knows about low-code/no-code platforms


The era of AI has shown that machine learning technologies can streamline processes in ways that alter how we live and work. We can now listen to playlists carefully curated to match our taste by a “machine” that has analyzed our listening activity, or use GPS applications that optimize routes within seconds.

In situations like these, AI can feel harmlessly helpful, but its capabilities do not end with benign, fun personalization features. When our phones seem to “listen” in order to place targeted, “helpful” ads in front of us, conversations about privacy have to start.

This is where AI’s much-debated risk-reward tradeoff comes into play. A recent McKinsey report states that new data, intellectual property, and regulatory risks are emerging with generative-AI-based coding tools: increased speed often brings security vulnerabilities in AI-generated code, putting systems and organizations at risk through coding errors, governance gaps, and more.

A study by Stanford University found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who write code alone, concluding that while these tools are effective at speeding up work, they should be used with caution. AI-assisted code clearly opens the door to issues that demand stronger security practices within the enterprises that adopt it.
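To make the risk concrete, here is a minimal, hypothetical Python sketch of the kind of flaw such studies describe (the function names and schema are invented for illustration): string-interpolated SQL of the sort an assistant might plausibly suggest, next to the parameterized form that closes the injection hole.

```python
import sqlite3

# Insecure pattern an AI assistant might plausibly suggest:
# user input is interpolated directly into the SQL string,
# leaving the query open to SQL injection.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    cursor = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchall()

# Safer equivalent: a parameterized query lets the driver
# escape the input, so a hostile value is treated as data,
# not as part of the SQL statement.
def find_user_safe(conn: sqlite3.Connection, username: str):
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```

A reviewer with security training spots the difference immediately; a citizen developer accepting a suggestion may not.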

Navigating security during the citizen developer revolution

Despite the risks, developers in most industries are using AI in the development and delivery of code. In fact, according to GitHub and Wakefield Research, 92% of developers already use AI-powered coding tools in their work.

Low-code/no-code and “code assist” platforms are putting AI within reach of “citizen developers,” non-technical employees who lack formal coding education but are now using these platforms to create business applications. Gartner predicts that by 2024, over 50% of medium to large enterprises will have adopted a no-code/low-code platform. By making the development process accessible to more employees, enterprises are seeking to execute a triple play: solve software problems more quickly, reduce the strain on technical teams, and speed up AppDev innovation. It sounds great in theory, but in practice, we’re finding the risks run far and wide.

By using AI-assisted features like code suggestions, citizen developers can harness the power of AI to craft intricate applications that tackle real-world challenges, while reducing their traditional dependency on IT teams. However, the speed that generative AI enables comes with increased responsibility: revolutionary as it is, AI-assisted code without proper security guidelines can expose enterprises to a myriad of threats and security vulnerabilities.

Adding low-code/no-code capabilities to the mix raises a weighty question for enterprises: are the security processes already in place capable of handling the influx of threats that come with AI-generated or AI-assisted code?

These platforms can also obscure where exactly code is coming from, opening the door to regulatory risk and raising the question of whether the code being developed carries the proper permissions.

Establishing guardrails that prevent chaos and drive success

According to Digital.ai’s 2023 Application Security Threat Report, 57% of all applications in the wild are “under attack,” meaning they have experienced at least one attack. Research from NYU found that 40% of tested code produced by AI-powered “copilots” contains bugs or design flaws that an attacker could exploit.

Low-code/no-code platforms inadvertently make it easy to bypass the procedural steps that safeguard code on its way to production. The problem is compounded when a workflow lacks developers with concrete coding and security knowledge, since those are the people most inclined to raise a flag. From data breaches to compliance issues, increased speed can come at a great cost for enterprises that don’t take the steps needed to scale with confidence, and the fallout can include not only financial losses but legal battles and damage to a company’s reputation.

Maintaining a strong team of professional developers, backed by guardrail mechanisms, can prevent a Wild West scenario in which the urge to play fast and loose creates security vulnerabilities, technical debt that mounts for lack of management and oversight at the developer level, and inconsistent development practices that spur liabilities, software bugs, and compliance headaches.
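As one illustration of what such a guardrail can look like, the sketch below wraps the open-source static analyzer Bandit in a simple pre-merge security gate. The `src` directory, the pass/fail behavior, and the script itself are assumptions for illustration, not a prescribed setup.

```python
import subprocess
import sys

# Hypothetical pre-merge guardrail: fail the build if the static
# analyzer Bandit reports findings in the application source.
# Assumes Bandit is installed (pip install bandit) and the code
# under review lives in ./src.
def security_gate(source_dir: str = "src") -> int:
    result = subprocess.run(
        ["bandit", "-r", source_dir],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Security gate failed: review findings before merging.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```

Wired into a CI pipeline, a gate like this catches risky patterns regardless of whether the code came from a senior engineer, a citizen developer, or an AI assistant.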

AI-powered tools can offset the complications of acceleration and automation through code governance and predictive intelligence mechanisms. However, enterprises often find themselves with a piecemeal portfolio of AI tools that creates bottlenecks in their development and delivery processes, or without the proper security tooling to ensure code quality.

In these situations, citizen developers can and should turn to their technical teams and apply DevSecOps learnings, from change management best practices and release orchestration to security governance and continuous testing, to build a systematic approach that leverages AI capabilities at scale. That way, enterprises get the full benefit of this new way of working without falling victim to its risks.
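Continuous testing is one concrete place to start: security expectations can be encoded as regression tests that run on every change. Below is a minimal, hypothetical sketch (runnable with pytest) that feeds a classic injection payload to a parameterized query helper like the one sketched earlier and confirms that no rows leak back.

```python
import sqlite3

# Hypothetical query helper, mirroring the parameterized
# sketch shown earlier in this piece.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

# Hypothetical continuous-testing step: a regression test that
# feeds a classic injection payload and confirms it is treated
# as a literal name rather than as SQL.
def test_injection_payload_returns_no_rows():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])
    payload = "' OR '1'='1"
    assert find_user_safe(conn, payload) == []
```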

The key for large enterprises is to strike the right balance between harnessing AI-assisted platforms such as low-code/no-code and safeguarding the integrity and security of their software development efforts, so they can realize the full potential of these transformative technologies.


