
GPT-3 AI Successfully Mimics Philosopher Daniel Dennett

An AI built with OpenAI’s GPT-3 model is pretty good at mimicking philosopher Daniel Dennett, according to an experiment by philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby. With Dennett’s permission, the researchers fine-tuned GPT-3 on millions of his words about AI, human consciousness, and related philosophical topics, and found that it wasn’t always easy to distinguish Dennett’s real answers from the AI’s output.

AI Philosophy

The AI model was trained on Dennett’s answers to a range of questions about free will, whether animals feel pain, and even his favorite bits of other philosophers’ work. The researchers then asked different groups of people to compare the AI’s responses with Dennett’s real answers and see if they could tell them apart: 302 people online who followed a link from Schwitzgebel’s blog, 98 confirmed college graduates from the online research platform Prolific, and 25 noted Dennett experts. Even deep immersion in Dennett’s philosophy and work was no guarantee of correctly identifying the source of the answers, however.
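The researchers haven’t published their full pipeline here, but the mechanics of this kind of fine-tune were straightforward with OpenAI’s API of that era. Below is a minimal sketch using the legacy OpenAI Python client; the file name, prompt format, and choice of the “davinci” base model are illustrative assumptions, not details from the study.

```python
# Minimal sketch of a GPT-3 fine-tune with the legacy OpenAI Python
# client (openai 0.x). File name, prompt format, and base model are
# assumptions for illustration, not details from the Dennett study.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Training data: one JSON object per line, e.g.
# {"prompt": "Could we ever build a robot that has beliefs?\n\n###\n\n",
#  "completion": " I have long argued that ..."}
upload = openai.File.create(
    file=open("dennett_qa.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tune against the largest GPT-3 base model.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
)
print(job["id"])  # poll this job; completions then use the tuned model
```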

The Prolific participants managed an average of just 1.2 correct answers out of 5 questions. The blog readers and the experts each answered ten questions, with the readers averaging 4.8 out of 10. Not a single Dennett expert got a perfect score; only one answered nine correctly, and the group averaged 5.1 out of 10, barely higher than the blog readers. Interestingly, the question whose responses most confused the Dennett experts was actually about AI sentience, specifically whether people could “ever build a robot that has beliefs?”

Despite the impressive performance by the GPT-3 version of Dennett, the point of the experiment wasn’t to demonstrate that the AI is self-aware, only that it can mimic a real person to an increasingly sophisticated degree. And because OpenAI and its rivals are continuing to refine their models, similar quizzes will likely only get harder to pass.
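To put those scores in context, it helps to compare them with pure guessing. Here is a quick back-of-the-envelope check, assuming each question presented one genuine Dennett answer among five candidates; that five-option format is an assumption, not something stated above.

```python
# Back-of-the-envelope baseline, assuming each question showed one real
# Dennett answer among five candidates, so a blind guess is right 20%
# of the time. (The five-option format is an assumption here.)
CHANCE_PER_QUESTION = 1 / 5

groups = [
    ("Prolific participants", 1.2, 5),
    ("Blog readers", 4.8, 10),
    ("Dennett experts", 5.1, 10),
]

for name, avg_correct, num_questions in groups:
    expected = CHANCE_PER_QUESTION * num_questions
    print(f"{name}: {avg_correct}/{num_questions} correct "
          f"vs {expected:.1f} expected by chance")
```

On that assumption, the Prolific group’s 1.2 out of 5 is essentially the 1.0 expected from guessing, while the readers and experts beat chance handily yet still missed roughly half the questions.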

“I want to emphasize: This is not a Turing test! Had experts been given an extended opportunity to interact with GPT-3, I have no doubt they would soon have realized that they were not interacting with the real Daniel Dennett. Instead, they were evaluating only one-shot responses, which is a very different task and much more difficult,” Schwitzgebel wrote in discussing the experiment. “Nonetheless, it’s striking that our fine-tuned GPT-3 could produce outputs sufficiently Dennettlike that experts on Dennett’s work had difficulty distinguishing them from Dennett’s real answers, and that this could be done mechanically with no meaningful editing or cherry-picking. As the case of LaMDA suggests, we might be approaching a future in which machine outputs are sufficiently humanlike that ordinary people start to attribute real sentience to machines, coming to see them as more than ‘mere machines’ and perhaps even as deserving moral consideration or rights.”

GPT-3 Academics

This experiment is only the latest demonstration of how GPT-3 and rival AI models can handle human conversational tasks, whatever the philosophical questions about consciousness. It recalls the recent effort by an AI researcher to have GPT-3 write an academic paper about its own ability to write such a paper. With a little human aid, that paper came out coherent, if not brilliant, and similar human assistance with the philosophy responses might have made the Dennett quiz all but impossible. Upgrades like the InstructGPT default model OpenAI released this year have further improved GPT-3’s abilities, making it better at following the intent behind a user’s request and eliminating most nonsense answers. An AI philosopher mimicking one or more humans doesn’t seem very far-fetched, though how original it could be in its musings is debatable.
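For a sense of what querying that instruction-tuned default looks like in practice, here is a minimal sketch using the legacy OpenAI client; the prompt is an invented example, and “text-davinci-002” is assumed as the InstructGPT-based default model of that period.

```python
# Minimal sketch of querying OpenAI's instruction-tuned default GPT-3
# model via the legacy Python client (openai 0.x). The model name and
# prompt are illustrative assumptions, not taken from the experiment.
import openai

openai.api_key = "sk-..."

response = openai.Completion.create(
    model="text-davinci-002",  # InstructGPT-based default at the time
    prompt=(
        "Answer in the voice of a philosopher of mind: could we ever "
        "build a robot that has beliefs?"
    ),
    max_tokens=150,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```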


GPT-3 AI Writes and Submits Academic Paper On GPT-3 AI

AI Dungeon’s Synthetic Story and Pictures Released on Steam Gaming Platform

OpenAI Debuts New GPT-3 Model to ‘Align’ With Human Intent