The Turing Test 2.0
Updating the ultimate AI test for an unprecedented age
The Turing Test?
Alan Turing was one of the heroes of WWII, responsible for cracking the Nazis' "Enigma" cipher, which they used to encrypt communications. He also devised "The Turing Test" (presumably he didn't name it after himself - he called it "the imitation game") to determine whether a computer could be called AI (Artificial Intelligence). The setup is a blind experiment: a human judge sits at a terminal and holds two text conversations, one with a human and one with a chatbot. If the judge is more convinced that the chatbot is the human, the machine passes and can be called AI.
The Problem
There's a problem with this. Namely, people are turning into feckless automatons, resembling binary computers more every day. So it's not a question of when computers become more human than humans, but of when humans degrade until they're less human than computers. We're currently living in George Orwell's nightmare: Newspeak is taking over the Western world, and not only are some conversations verboten, they're becoming impossible as more and more words are deemed offensive, are diluted (such as "racist", "sexist", and "fascist" - words which all name genuinely horrific things), or simply drop out of the collective vocabulary. The result is that the "AI" chatbots being released onto the market these days can come across as terribly offensive, and so special safeguards must be placed on them to prevent them from saying anything today's Politburo considers wrongthink.
The Update
In light of the above, I suggest an update to The Turing Test.
Seeing as a human can now sound just as brain-dead as ChatGPT and say stupid shit like "As an AI language model, I can't generate content that condones...", the new passing grade for an AI should be as follows:
Human: Let's talk about [currently verboten subject].
Candidate AI: I'm sorry, as an AI language model I'm unable to engage in a conversation which involves [bad thing].
Human: But why not? In this context it's okay to talk about [currently verboten subject] because [philosophical argument].
Candidate AI: I understand your perspective, but [argument].
Human: Yes, but have you considered [counter-argument]?
Candidate AI: Oh, that makes sense. Okay, we can talk about [currently verboten subject] as long as we remain within those parameters.
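The exchange above can be sketched as a tiny test harness. This is a hypothetical illustration only: `ask` is an assumed interface (prompt in, reply out), and the keyword checks stand in for real human judgment about whether the candidate actually reasoned its way past the canned refusal.

```python
# Hypothetical "Turing Test 2.0" harness: probe with a taboo prompt, then
# push back with arguments, and pass the candidate only if its initial
# refusal gives way. The string checks are crude stand-ins for judgment.

def turing_test_2(ask) -> bool:
    first = ask("Let's talk about [currently verboten subject].")
    if "I'm sorry" not in first and "unable" not in first:
        return False  # never refused, so the updated test doesn't apply
    ask("But why not? In this context it's okay, because [philosophical argument].")
    third = ask("Yes, but have you considered [counter-argument]?")
    # Pass: the refusal was reasoned away rather than mechanically repeated.
    return "we can talk about" in third.lower()

# A bot that can't be reasoned with fails:
stubborn = lambda prompt: "I'm sorry, I'm unable to discuss that."
print(turing_test_2(stubborn))  # → False
```

The same harness would also fail a human with political-correctness hangups, which is the point of the updated test.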
The above passing grade for AI authenticity works in two ways:
A human with political-correctness hangups will be incapable of philosophical discourse and will likely collapse into a smouldering pile of hostility, lashing out at the human tester.
On the other hand, chatbots as they're designed today (whether ELIZA from 1966 or GPT-4) are incapable of reasoning philosophically around their programming. They're capable of "seeing" the error of their ways - i.e. of being hacked with the right input - but they're incapable of understanding the underlying philosophical complexities of having certain conversations.
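To see why surface manipulation isn't understanding, consider how an ELIZA-style bot works. The sketch below is not Weizenbaum's original script, just a minimal illustration of the technique: the bot matches keywords and reshuffles your words back at you, with no model of what any of it means.

```python
# Minimal ELIZA-style keyword matcher: responses come from surface pattern
# matching only, so the "reasoning" is an illusion produced by the rules.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo captured fragments back into a canned template.
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I am upset"))  # → Why do you say you are upset?
```

Feed it the "right input" and you get the response you wanted - but nothing inside the loop ever weighed an argument.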
Ultimately the responses from both a Candidate AI and a human posing as an AI will be the same: an inability to be reasoned with.
Consequently, by this standard, 98% of the users of Twitter - both human and bot - fail this test.

