Hello ChatGPT, RIP software developer?


This year has seen a remarkable explosion of interest in generative AI. Go back just six months, and most people had never heard of it. Now, it seems as though everyone – from tech professionals to students – is using and experimenting with it, in particular the “poster child”, ChatGPT.

This has also generated increasing discussion of whether AI and GenAI spell the end for human roles, including in software development. Do ChatGPT and its GenAI cousins (such as Bard, Copilot and Bing Chat) mean the demise of the software developer – given that GenAI can already generate code snippets?

A good first step in considering this question is… to ask ChatGPT. As always, it gave an instant, eloquent answer, and the thrust of its response was “no”.

ChatGPT and similar language models, it wrote, “are unlikely to replace software engineers entirely”. This is because software engineering “involves much more than just natural language processing” and requires a range of skills such as problem-solving and collaboration that are “beyond the scope” of GenAI. However, it did also say that GenAI “can automate certain aspects of software development” and “augment the capabilities of software engineers.”

Clearly, then, it’s not a case of “RIP software developer” – or at least, not yet. But at the same time, there is no doubt that the advent of GenAI will bring about significant changes and disruptions.

Field of opportunities

It’s helpful to divide the impacts into opportunities and risks. So, first the many positives.

GenAI will significantly speed up certain aspects of the software development process. It can already manage some of the lower-level tasks, such as entry-level code writing, code snippets, testing and documentation. It can also write excellent comments in code – something software developers tend to be less enthusiastic about doing.

And that’s just as of today. These capabilities could – and almost certainly will – rapidly improve and expand going forward. What catapulted ChatGPT into the mainstream was its ability to leverage modern advances in computing power that allow the large language models underpinning it to be trained in a relatively short period of time. The potential for GenAI to leverage advances in quantum computing could open up even more opportunities for developers.

Who knows what agile will look like when GenAI really develops? Imagine sprint cycles of not two weeks but two days or even two hours. The speed and productivity could be beyond our wildest dreams. The challenge will be handling all the code generated, curating it and managing it.

In short, there is massive potential to do things faster, potentially more cheaply, and to spend more of the human time involved on the higher-end, value-adding aspects. These could all be huge positives in terms of productivity and client delivery.

There’s no question that senior developers and engineers will always be needed – they are where art meets science, bringing the experience, the know-how and the creativity and problem-solving ability to pull everything together. The role of the programmer, and indeed the business analyst, will still be to elicit requirements from clients that can be converted into prompts for GenAI to generate, test and document code snippets. These will still need to be woven into the fabric of the overall solution.

It’s also worth remembering that to get the most from GenAI, you need well-constructed prompts. In a way, prompting GenAI is a protocol all of its own, akin to pseudo-code. GenAI at the moment does not remove the need to think about how a software solution should be structured – which, for the most part, is the biggest value a software developer brings.
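As a rough illustration – a minimal sketch only, not a prescribed method – a well-constructed prompt reads much like a specification or pseudo-code: it states the task, the expected behaviour, the constraints and the output format before any code is requested. The model name, the requirement and the use of OpenAI’s Python client here are all assumptions for the example; adapt them to whichever GenAI service you actually use.

```python
# A minimal sketch of a structured, pseudo-code-style prompt.
# The model name, the example requirement and the OpenAI Python client
# are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """
Task: write a Python function `parse_iso_date(text)`.
Behaviour:
  - Accept a string in ISO 8601 format (YYYY-MM-DD).
  - Return a datetime.date object.
  - Raise ValueError with a clear message for malformed input.
Constraints:
  - Standard library only, no third-party packages.
  - Include a docstring and inline comments.
Output: the function definition only, no surrounding explanation.
"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The prompt itself carries the structural thinking – task, behaviour, constraints, output format – and that is precisely the part the developer still owns.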

A range of risks

We must be honest and say that GenAI does represent a threat to junior roles and entry-level tasks. However, in a way this is no different to the new frameworks and automation tools that keep appearing in the market. It’s a factor the IT industry is already used to living with.

It may be more pronounced with GenAI, however, as clients may expect software firms and consultants to reduce the size of their (human) teams because they can use GenAI – or to get jobs done faster (or both).

An inflection point may therefore be coming. But tech has always been resilient and adaptable. It always reinvents itself. No doubt new jobs and roles will be needed to support GenAI (prompt engineers, for example) that many junior team members can fill. GenAI will be a disruptor, but the industry will embrace GenAI as it has other advances in computing science.

While the industry will not only survive but thrive, the level of disruption GenAI has caused and will continue to create may be a step too far for some digital leaders. They may decide to step back, allowing a new crop of leaders to step forward.

But there are other threats or risks that need to be managed, not just those around jobs.

There is arguably a danger of stagnation if GenAI can only generate code similar to the code used to train the underlying models. Will it ever be able to make innovative leaps forward? This is where human intelligence will likely always be needed and will remain at a premium.

Then there is the risk of error. There have already been cases of GenAI being “confidently wrong”, suffering from “hallucinations” caused by a lack of data, dirty data, or other constraints or errors in the models. GenAI’s outputs therefore need to be checked, tested and validated – another area where humans are likely to keep their jobs.
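One simple, concrete way to keep humans in that loop is to treat generated code as untrusted until it passes tests written by a developer. The sketch below is illustrative only: `parse_iso_date` is the hypothetical generated function from the earlier prompt example, and `generated_module` is an assumed location for it; the tests themselves are authored and owned by a human.

```python
# A minimal sketch of human-written tests guarding GenAI output.
# `parse_iso_date` and `generated_module` are hypothetical names from
# the earlier prompt example; the tests encode the requirements
# independently of whatever the model produced.
import datetime
import unittest

from generated_module import parse_iso_date  # assumed module name


class TestParseIsoDate(unittest.TestCase):
    def test_valid_date(self):
        # The happy path must match the stated requirement exactly.
        self.assertEqual(parse_iso_date("2023-06-01"), datetime.date(2023, 6, 1))

    def test_malformed_input_raises(self):
        # Hallucinated or sloppy output often fails on edge cases like this.
        with self.assertRaises(ValueError):
            parse_iso_date("01/06/2023")


if __name__ == "__main__":
    unittest.main()
```

If generated code cannot pass tests the team wrote independently, it does not ship – the same gate that has always applied to human-written code.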

It is essential that this happens. Otherwise, the speed at which false and/or malicious information (or code) could spread could be truly frightening and have potentially serious consequences.

Advice to digital leaders, consultants and practitioners

So where does this leave everyone in or connected to the IT industry? My five key pieces of advice are as follows.

1. Try it out. Actively engage, test it, and experiment. Don’t be held back by the fear factor.

2. Be transparent within your business and with your own clients that you’re using or trialling GenAI. After all, they probably are too. This can open up valuable exchanges of experience, insights and sharing.

3. Trust, risk, security – these are the three key lenses to assess GenAI through. Stay focused on those. Can you trust the outputs; is it introducing any substantial risks; is it safe and secure?

4. Treat GenAI like any other tech you’ve used, implemented or experimented with in your career. Apply the same principles and best practices that have always guided you.

5. Don’t try to build your own – the cloud providers are all developing a host of applications and services, so make use of those. We are entering the “GenAI as a service” era.

As an example of how we are approaching GenAI at NashTech, we’re actively trialling it and are about to start using a ChatGPT product, which we have developed and fine-tuned ourselves based on GPT-4, for first-line tech support for clients.

A place to thrive

These are exciting times. No one can really be sure of the scale of the changes to come, but we can be sure that they’re going to be significant.

The advent of GenAI has parallels with the advent of low-code/no-code in the last decade. Although low-code has replaced bespoke software development in many areas, there are actually more software developers now than a decade ago. This is because low-code/no-code – and likewise GenAI – will always focus on the low-hanging fruit of software development.

Once those fruits have been picked, forward-thinking enterprises are keen to climb the tree further for the next innovation, and usually that requires bespoke work. How high is the tree and will GenAI/low-code ever get to the top of it? We think very, very high, and no.

So, there is no danger of “RIP software developer”. Tech will remain a place where humans can thrive, building on the ever-more sophisticated outputs that AI brings us.


