Is the Turing Test Still Relevant?

Jason Lomberg, North American Editor, PSD



South Park recently did an episode with a co-writer’s credit for ChatGPT (bear with me), and it did a great job summarizing the inscrutable nature of artificial intelligence and, with some minor extrapolation, why the Turing Test needs to evolve.

On the famously topical show, the kids use ChatGPT to answer text messages and write (overly) detailed research papers. And here’s the kicker – no one’s able to distinguish ChatGPT from “genuine” creative endeavors.

Great fun, but it raises an obvious question – if chatbots and AI tools can create pictures, essays, and even music that fool at least some people, is the original Turing Test still relevant?

The original Turing Test was a text-only evaluation in which a machine’s ability to pass as human would indicate true artificial intelligence. But we’re far beyond that now – many websites sport chatbot help tools that satisfy the letter (and arguably the spirit) of the Turing Test, and AI can easily pass as human in online PvP gaming.
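To make that original setup concrete, here’s a minimal sketch of a text-only imitation game in Python – the responder and judge functions are purely hypothetical stand-ins, not real chatbot code:

```python
import random

def turing_trial(human_reply, machine_reply, judge, prompts):
    """Run a text-only imitation game: for each prompt the judge sees one
    reply, not knowing whether a human or a machine produced it, and
    guesses the source. Returns the fraction of correct guesses."""
    correct = 0
    for prompt in prompts:
        is_machine = random.random() < 0.5
        reply = machine_reply(prompt) if is_machine else human_reply(prompt)
        guess = judge(prompt, reply)  # True means "I think this was a machine"
        correct += (guess == is_machine)
    return correct / len(prompts)

# Toy stand-ins (purely illustrative):
human = lambda p: "Honestly, I'd have to think about that."
machine = lambda p: "As a language model, I cannot answer that."
naive_judge = lambda p, r: "language model" in r

score = turing_trial(human, machine, naive_judge, ["What is love?"] * 100)
# This tell-tale phrasing lets the judge spot the machine every time
# (score == 1.0); a judge stuck near 0.5 would mean the machine passes.
```

A judge whose accuracy hovers around chance is the operational meaning of "passing" here – which is exactly the bar many of today’s chatbots already clear.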

A discerning eye can pinpoint the jankiness of AI-generated people, but these quasi-artistic creations can easily pass muster. What once seemed impossible – AI exhibiting creativity (or faking it really well) – is now very real.

So if the original terms of the Turing Test are obsolete, where do we move the AI goalposts?

The Society of Automotive Engineers’ widely cited autonomy levels are a good start, ranging from Level 0 (manual operation) to Level 5 (full automation requiring no human interaction). Right now, most vehicles don’t eclipse Level 2 (partial automation), and the biggest obstacle to full vehicle automation appears to be humans – and AI’s inability to account for our irrational behavior – but even that’s changing.
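As a rough sketch, that ladder boils down to a simple lookup table – the descriptions below paraphrase SAE J3016 rather than quote it, and the helper function is my own illustrative shorthand, not part of the standard:

```python
# The six SAE J3016 driving-automation levels (descriptions paraphrased).
SAE_LEVELS = {
    0: "Manual operation -- the human does all the driving",
    1: "Driver assistance -- steering OR speed support",
    2: "Partial automation -- steering AND speed support; driver monitors",
    3: "Conditional automation -- system drives; human must take over on request",
    4: "High automation -- no takeover needed within a defined domain",
    5: "Full automation -- no human interaction required anywhere",
}

def requires_human_fallback(level: int) -> bool:
    """Illustrative shorthand: at Levels 0-3 a human must remain ready
    to drive; at Levels 4-5 the system needs no human fallback."""
    return level <= 3
```

The jump that matters for the "new Turing Test" argument is from Level 3 to Level 4 – the point where the human drops out of the loop entirely.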

So what’s the modern spirit of the Turing Test? Academia has its own definition for artificial intelligence, but what about the rest of us? What are we really trying to ascertain? Self-awareness? Sentience?

Again, academia is all over “artificial consciousness”, with Bernard Baars, formerly a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute in San Diego, CA (and author of “On Consciousness”), laying out various functions necessary for conscious AI, including definition and context setting, adaptation and learning, and decision-making or executive function, among others.

Australian philosopher David Chalmers describes AI in more general terms, relating it to certain types of computation not dissimilar to those performed by the human brain.

And that’s just scratching the surface – if studying artificial consciousness teaches you anything, it’s that AI is anything but an objective truth, and machine self-awareness probably won’t have a qualitative end-state.

Either way, it’s time to toss the Turing Test in the dustbin of history and pick a new, universal frame of reference for AI.