Bringing Bridget Jones’ diary into the AI era


June 7, 2024 was the 70th anniversary of the death of Alan Turing, the father of modern computer science, whose wisdom still resonates in fields ranging from mathematics to biology.

But I want to argue that in the age of ChatGPT, one aspect of his legacy—the famous Turing Test, also known as the ‘imitation game’, which benchmarks when a machine intelligence could be said to be ‘thinking’—is now outdated. Turing’s idea, eloquently articulated in his renowned 1950 paper, proposed that a machine could imitate human responses so convincingly that an interlocutor would believe they were conversing with another person rather than a programmed system.

He suggested that this would sufficiently answer the question, ‘Can computers think?’ Turing predicted that this would be achievable within 50 years, but within just 14, a program called Eliza became so good at conversing that interlocutors confided all their secrets to it.

In some ways the Turing Test, then, was effectively passed during that era, but we were no closer to true AI. Even the most advanced ‘expert systems’ of the 1970s and 1980s suffered from software brittleness, excelling at specific tasks but unable to adapt to various use cases. 

However, a significant shift is underway with models like ChatGPT. These transformer-based neural networks have billions of parameters, trained on a vast corpus of text from the internet, and they simulate reasoning so effectively that it’s hard to argue they aren’t actually thinking.

But although they have achieved the conversational skill Turing set out for passing his machine-thinking experiment, they still aren’t our equals. Just ask ChatGPT to predict what the gender of the first female President of the USA will be, and you will see that common sense is lacking!

Stop worrying about what’s under the bonnet

AI researchers have long harbored reservations about the Turing Test. In a world where rapidly improving AI is becoming ubiquitous, the question now is: should we seek a more robust benchmark for machine intelligence?

The reality is, we’re still far from true AI. Take Tesla Autopilot, for example; while it’s impressive, it works by training its neural network on examples of ‘good’ driving selected by humans. It doesn’t possess the judgment or creativity of a human driver. By the same token, ChatGPT is fundamentally rooted in the information available on the internet. It can’t invent; it can only autonomously apply what it has been trained on.

That means that—aside from a few fringe AI researchers who believe LLMs are achieving sentience—despite their ability to ‘converse’ with us, these interactions remain limited in ways that a human’s would not be. It’s like dealing with a chatbot that can’t fully grasp your needs yet stubbornly refuses to connect you to a human who can.

It will be clear to anyone who uses these systems that, while remarkably proficient, they do not possess true intelligence. At heart they are pattern-matching and autocomplete machines, and their ability to generate coherent responses merely creates the illusion of genuine sentience.
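The ‘autocomplete’ idea can be made concrete with a toy sketch: predict the next word purely from counts over a tiny corpus. The corpus and function names here are invented for illustration; real LLMs use transformer networks with billions of parameters, but the underlying objective—predicting the next token from patterns in training text—is the same in spirit.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for 'the internet'.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — seen twice, vs once for 'mat' and 'fish'
```

Scale the corpus up to trillions of tokens and swap the counting table for a neural network, and the coherent responses of an LLM start to look less like magic and more like very good pattern completion.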

Perhaps this mirrors how humans learn, but humans also possess the capacity to invent, beyond applying knowledge. It would be better, I think, if we stopped worrying (or hoping?) that our machine assistants are as intelligent as us or capable of thinking like us, and instead focused on maximising their usefulness and efficiency.

As we’re anticipating another film installment soon, consider Bridget Jones. Renowned for meticulously recording her dating experiences and calorie intake on traditional paper, I believe we are on the verge of having highly intelligent diaries that could have acted as much-improved, super-smart companions for her. These diaries could provide insights into exactly how much red wine she has unintentionally consumed this month, analyse Mark Darcy’s behavioral patterns to offer insights into his feelings towards her, and much more.
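To make the ‘smart diary’ idea concrete, here is a minimal sketch of the kind of queries it might answer. The entry format, the figures, and both helper functions are entirely invented for illustration—a real system would parse free-text entries with an LLM rather than rely on structured tuples.

```python
from datetime import date

# Hypothetical diary entries: (date, units of red wine, note).
entries = [
    (date(2024, 6, 1), 3, "Mark was aloof at the launch party"),
    (date(2024, 6, 8), 5, "Mark said my skirt was absolutely enormous"),
    (date(2024, 6, 15), 2, "Mark actually smiled. Progress?"),
]

def monthly_wine_units(entries, year, month):
    """Total units of red wine logged in a given month."""
    return sum(units for d, units, _ in entries
               if d.year == year and d.month == month)

def mentions_of(entries, name):
    """Count diary entries that mention a given person."""
    return sum(1 for _, _, note in entries if name in note)

print(monthly_wine_units(entries, 2024, 6))  # 10
print(mentions_of(entries, "Mark"))          # 3
```

Even this crude version answers the red-wine question instantly; a genuinely intelligent diary would go further and interpret the notes themselves.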

Would Bridget consider her smart diary a person? Perhaps, but I think she would see it as the latest convenience. Do you ever wonder if your vacuum cleaner has a soul, or worry about whether your TV has an inner life? No, you just care that they keep functioning and providing digital content 24/7, and I believe that’s how it should be.

We may not care if our helpers are like us

Bridget’s digital assistant resembles the kind of intelligent document already emerging in business—communicative content systems whose insights are so interconnected and responsive that they appear ‘conscious’ to their users.

Already, customers are using intelligent content automation systems that read all incoming emails, categorise and comprehend embedded documents, and take intelligent actions based on them. In this regard, Turing was on target: the interface to intelligent helper systems will improve to the point that we may forget they are machines and instead perceive them as almost human. However, I believe we won’t view them that way. Rather, our focus will remain on the valuable data, information, and guidance they provide for us.
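The read-categorise-act loop such systems perform can be sketched in a few lines. Everything here is an assumption for illustration—the `Email` structure, the categories, and the keyword rules, which stand in for the ML classifier a real system would use.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def categorise(email: Email) -> str:
    """Crude keyword classifier standing in for a trained model."""
    text = (email.subject + " " + email.body).lower()
    if "invoice" in text or "payment" in text:
        return "finance"
    if "contract" in text or "agreement" in text:
        return "legal"
    return "general"

def route(email: Email) -> str:
    """Take an 'intelligent action': route the email to a work queue."""
    return {"finance": "accounts-payable",
            "legal": "legal-review",
            "general": "support"}[categorise(email)]

inbox = [
    Email("supplier@example.com", "Invoice #1234", "Payment due Friday."),
    Email("partner@example.com", "Draft agreement", "Contract attached."),
]
for e in inbox:
    print(e.subject, "->", route(e))
```

The point is the shape of the pipeline, not the rules: swap the keyword test for a model and the same loop reads, comprehends, and acts without a human in the way.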

I’d like to propose a potentially more compelling test of whether a machine we’ve created is at least as intelligent as a human: what if we gave an AI $100,000 in investment money and tasked it with turning it into $1,000,000? Perhaps we’d learn even more if we then asked it to use its winnings to develop software that solves a real problem.
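It's worth noting how demanding a tenfold return is. A quick calculation shows the compound annual growth rate (CAGR) required over a few horizons; the horizons themselves are illustrative assumptions, not from the proposal above.

```python
def required_cagr(start, target, years):
    """Compound annual growth rate needed to grow `start` into `target`."""
    return (target / start) ** (1 / years) - 1

for years in (5, 10, 20):
    rate = required_cagr(100_000, 1_000_000, years)
    print(f"{years} years: {rate:.1%} per year")
# 5 years:  58.5% per year
# 10 years: 25.9% per year
# 20 years: 12.2% per year
```

Even over twenty years, the AI would need to beat the long-run average of the stock market every single year—a bar that most professional human fund managers fail to clear.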


