First, an admission. I am not a GenAI expert. Then again, who is? Even the computer scientists, coders and product developers behind Copilot, ChatGPT and Bard have essentially only made the tools available – even they can’t be totally sure of what the future ahead of us looks like as the machine intelligence learns and expands.
It’s inevitable, in fact, that with a technology this new and fast-developing, there is a whole clutch of unknowns alongside the things we can be sure of.
The known value of GenAI
What we do know is that GenAI has incredible potential. It could revolutionise the ways in which we all work and add huge value to what we do. Simply as a support tool to enable us to get things done more quickly, the power of GenAI is already clear. It’s akin to a ‘virtual assistant’ or, in human terms, a really smart intern who can find, extract and present whatever you’re looking for.
Whether it’s instantly finding and formatting specific technical information that you need, writing first drafts of documents or presentations, creating templates for policies, contracts or other assets, summarising and comparing source materials, or automating the compilation of meeting notes and actions – GenAI can be a powerful aid to us all.
In most of these cases, it will take you 60-70% of the way. The human needs to tidy it up, smooth out the rough edges and likely add some context to it to get it over the line. This does mean that people need to hone new skills to work effectively with GenAI – casting a sharp editorial eye over content, having the ability to critically review and assess. It also means that we need to guard against a ‘deskilling’ risk – where people become over-reliant on technology to do things for them and lose their own professional and curious edge.
Another thing that’s clear is that GenAI really isn’t so good (at least at the moment) for creative, imaginative, innovative content – that remains the preserve of human beings and on present evidence looks set to stay that way.
The unknowns
But there are unknowns too. The biggest of these – the elephant in the room – is how much GenAI will eat into all of our current jobs. How much human activity will become redundant because of it? It has become commonplace to say that AI will augment humans, not replace them; that it will enable people to spend more of their time on value-adding activity and take away the tedious, time-consuming tasks.
I genuinely believe that is true. But nevertheless, there is always that fear that lurks around the edges – just how far will it go? There may not be a significant threat now or in the next few years – but who really knows what things will be like in 5, 10 or 20 years’ time…?
What excites us
Putting that aside, GenAI is probably the most exciting technology development since the creation of the internet. It’s moving so fast. Already, in the space of little more than a year it’s improved hugely from its beginnings. I’m seeing outputs from it that are truly impressive. It is also set to expand ever further into the infrastructure of how we work. It has already moved from being a standalone tool that you had to go to, to being stitched into our email systems and search engines. It is also set to become an increasingly shared resource across teams. Rather than people using it individually (and haphazardly), it will be like a common platform or portal that team members work from. It will be similar to how many of us have moved away from shared drives to shared environments like Teams with living documents and instant updates.
Another exciting possibility is that we could get GenAI talking to GenAI. In other words, one GenAI application could ask another GenAI system to review or build on what it has produced. The results may not come back to the human user until they have been through this machine conversation or conference – giving even better results.
What scares us
But this brings us back round to the unknown/fear side of the equation. Are we truly witnessing the birth of the rise of the machines? We talk about human roles being enhanced and augmented, but at what point do we actually mean ‘replaced’? Then there are all the well-rehearsed issues: the accuracy of GenAI’s outputs given the amount of falsehood and misinformation on the internet, hallucinations, bias, and data security and privacy.
Certainly, one aspect that is scary at the moment is how few businesses are really geared up to manage GenAI safely. Nash Squared’s 2023 Digital Leadership Report, which surveyed over 2,100 technology leaders around the world, found that 42% felt unprepared for the demands of GenAI. Only one in five had an AI policy in place. Over a third didn’t have any plans to even attempt one.
Putting in the guardrails
In my view, a clear policy that sets out some basic ground rules and guidance over the use of GenAI is absolutely essential. For example, for externally-facing outputs there must be a ‘human in the loop’ to review and sense-check first. It is also crucial that staff understand that some GenAI applications put everything ingested into the public domain – so there must be necessary checks and balances in place to prevent commercially sensitive information, or customer confidential data, being inadvertently published.
At Nash Squared, we introduced an organisation-wide policy last summer and it has proven highly beneficial in helping people use AI tools productively and safely. It doesn’t just set the rules – it gives people the support they need and helps build their confidence in using it, as we trial and understand the uses of AI in our own business.
There is so much ahead of us as GenAI continues to evolve and develop. It truly is an exciting time to be in the professional working environment. But it is also simply essential that businesses put in place the right controls, guidelines and support mechanisms to ensure they can move and adapt with confidence, agility and speed.