AI Socrates

Episodes of AI confidently asserting fabrications are no longer hypothetical. They reveal a hard truth: large language models offer fluent answers, but their confidence can mask deep uncertainty. This article explores why that happens and what a Socratic approach can teach us about fixing it.

Socrates encounters AI for the first time

Picture the scene. The great philosopher Socrates sits outside his home in Athens. It's the late fifth century BCE, and the man often regarded as the father of Western philosophy is pondering some of life's great questions when something odd ambles into view.

It's a small device, an electronic computer, although Socrates cannot know what that is. On its surface, two words are printed: "artificial intelligence." Socrates is confused, but being of an inquisitive mind, he reaches out to the odd little thing.

"Hello," Socrates says. "So you are an intelligent creature. Tell me what you know."

"You must ask me a question first," the AI replies.

"That's a good start," muses Socrates.

Socrates believed in the power of questioning. The Socratic Method of mutual inquiry is still used in teaching today, allowing students and their teachers to understand the values and logic that underpin particular beliefs and assertions.

Modern “chain-of-thought” prompting mirrors this discipline. By forcing a model to expose its intermediate reasoning, we can interrogate every step, just as Socrates questioned Athenian citizens. The aim is identical: surface hidden assumptions before they lead to error.
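
As a rough illustration, here is a minimal sketch of the difference between a direct prompt and a chain-of-thought prompt. The ask_model function is a placeholder for whatever LLM client you use, not a real API, and the prompt wording is just one way of asking a model to expose its steps.

```python
# A minimal sketch of chain-of-thought prompting.
# ask_model() is a stand-in, not a real API; wire it to your own LLM client.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an LLM provider."""
    raise NotImplementedError("replace with your own LLM client")

question = "How did humans first harness the power of fire?"

# Direct prompt: we only see the final answer, with no visible reasoning to question.
direct_prompt = question

# Chain-of-thought prompt: the model is asked to expose numbered intermediate steps,
# so each claim and assumption can be interrogated before the answer is trusted.
cot_prompt = (
    f"{question}\n\n"
    "Think step by step. List each intermediate claim as a numbered step, "
    "note any assumption you are making, and only then give your final answer."
)

if __name__ == "__main__":
    print(cot_prompt)
```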

The great philosopher would probably have quite liked the fact that generative AI is built around questions and prompts, although he may not have been quite so happy that AI responses don't typically include questions of their own. This is one area where AI development could move closer to human intelligence: recognising that AI doesn't always have all the answers, and that solutions can gain something from being inquisitive.

Socrates would also have appreciated moves towards reasoning AI. If AI can show a comprehensive understanding of the subject matter it is dealing with, it provides more value to the end users.

Socrates questions the AI

"How did man first harness the power of fire?" asks Socrates.

"It is believed that humans first harnessed fire by striking flint or quartzite against a small piece of metal, which produced a spark," the AI replies. "The spark lit a piece of tinder or kindling, and fire was nurtured from this."

But the AI isn't finished. "Ancient Greeks believed that Prometheus stole fire from the Olympians," the AI continues, "and gave it to humans."

Socrates is a little annoyed, perhaps at being referred to as "ancient."

"How do you know this?" he asks.

"I was trained with a large language model, or LLM, and my learning processes are governed by algorithms," the AI replies.

As the word "algorithm" derives from the ninth-century Muslim scholar Muhammad al-Khwārizmī, Socrates wouldn't have known what this meant, and LLM even less so. But he might have been cheered to learn of the AI's training and its utilisation of existing empirical knowledge.

In this sense, Socrates' conception of knowledge deviates a little from our own. Socrates believed that knowledge already exists in the soul and that our processes of learning and training are really processes of remembering and recollection.

If we think of LLMs as the "soul" of AI, Socrates would have been happy with its processes. While we may not think of knowledge as mere "recollection" these days, we still build our own understanding from the vast stores of human knowledge that have gone before. We then filter this knowledge in new ways to reach new, innovative conclusions. But what about AI mistakes? Unfortunately, AI hallucinations and mistakes can and do happen.

The AI hallucinates


To show how hallucinations arise, consider a deliberately absurd prompt: "Who was the first person to lift Mount Olympus, and how did he move it?" The narrative that follows is entertaining, yet it shows how readily a model invents detail whenever knowledge runs out.

The AI considers the question and responds with:

"Megalithades was the first person to lift Mount Olympus. He used a power saw to loosen the mountain at its base, and then he placed it upon a collection of huge logs, laid side by side, which stretched all the way to Taenarus. The logs allowed the mountain to roll.

"When he reached the sea, he floated the mountain on a giant raft made from marshmallows and sailed it to Crete. On Crete, he reinstalled the mountain with some superglue, opened a theme park, and began charging 500 drachmas for a ticket. The whole process took 16 hours."

Socrates places his learned head in his hands. "Get out of my sight," he says.

What we're dealing with here is an AI hallucination, and this is a real problem in artificial intelligence. In May 2025, a California law firm made headlines when it was discovered that its attorneys had used Google Gemini, along with specially designed AI models, to develop their arguments for a filing. The AI made the arguments up as it went along, citing articles that didn't exist and filling the gaps in its own knowledge with assumptions and fabrications.

While this is a very serious example of the damage AI hallucinations can do, it's just one in a growing list of artificially intelligent fantasies. ChatGPT hallucinations have thrown up some bizarre situations and responses in the last year or so. Sometimes these hallucinations are quite amusing, but at other times, as we've seen above, they can be seriously dangerous.

Socrates was right to be angered by this. One of his most famous statements is, "All I know is that I know nothing." While it's quite clear that Socrates did not know nothing, it's a great example of how the philosopher approached knowledge. Our beliefs and assertions can be undermined, and we must accept the fact that these opinions, no matter how strongly held, can and should be challenged. In simpler terms, we need to admit when we don't know something.

Now, we could argue that not all humans do this. We all know people who stick to their guns even when proven wrong or pretend to know more about something than they actually do. But AI is supposed to be an assistant to humans, augmenting and enhancing our abilities. It's not supposed to be a simple replication of humans, with all our failings and foibles.

So, what do we do about these AI mistakes? The first step is something that Socrates would have been very pleased with: getting AI systems to admit when they don't have a clear answer about something!

This is easier said than done. These systems are optimised to produce a confident answer every time, so programming them to admit, either to themselves or to human users, that they don't have all the answers is difficult. In fact, it's something that Themis AI, a team associated with MIT, has been grappling with for quite some time.

Recently, they've developed the Capsa platform, which wraps a model so that it can estimate its own uncertainty. When an AI becomes confused or lacks sufficient information, that uncertainty is reflected in its outputs; Capsa helps the system recognise those signals so it can take a step back, hold its digital hands up, and say, "Sorry, I don't know."
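
To make the idea concrete without claiming anything about Capsa's actual interface, here is a generic sketch of uncertainty-aware answering: score the model's confidence in its own output (here, from per-token probabilities) and abstain when it falls below a threshold. The function names, numbers, and threshold are all invented for illustration.

```python
import math

# Generic illustration of abstaining on low confidence. This is NOT the Capsa API;
# the log-probabilities and threshold below are invented for the demo.

def average_confidence(token_logprobs: list[float]) -> float:
    """Turn per-token log-probabilities into a single 0-1 confidence score."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

def answer_or_abstain(answer: str, token_logprobs: list[float], threshold: float = 0.7) -> str:
    """Return the answer only if average token confidence clears the threshold."""
    confidence = average_confidence(token_logprobs)
    if confidence < threshold:
        return f"Sorry, I don't know. (confidence {confidence:.2f} is below {threshold})"
    return answer

if __name__ == "__main__":
    # Invented log-probabilities for a confident and an unconfident generation.
    confident = [-0.05, -0.10, -0.02, -0.08]
    shaky = [-1.20, -0.90, -2.10, -1.60]
    print(answer_or_abstain("Fire was first controlled hundreds of thousands of years ago.", confident))
    print(answer_or_abstain("Megalithades moved Mount Olympus to Crete.", shaky))
```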

Would Socrates have considered AI truly intelligent?

This is where I need to make my own admission: I've no idea. What we know about Socrates is filtered through the retellings of students like Plato, and some of his teachings can be a little contradictory.

But as a devotee of learning and wisdom, Socrates would have at least appreciated some aspects of AI. He would have liked its ability to draw its own conclusions from existing stores of human knowledge, and he would have appreciated the dialogic form of conversational AI.

In terms of AI hallucinations, however, he would not have appreciated these at all. He would have considered AI's tendency to push forward, even when it's way out of its depth, to be a real failing. And I think we should consider this too. Ironing out those hallucinations is going to be key to AI as it develops, building greater trust and reinforcing AI as a valuable tool.

What should practitioners do next?

First, ask your LLM provider to expose a confidence or uncertainty signal with every answer, even something as simple as token-level probabilities.
Second, route low-confidence responses to a human reviewer before they reach the end user.
These two steps turn philosophical humility into operational safety.
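
As a rough sketch of those two steps, assume each answer arrives with some confidence score between 0 and 1, however your provider derives it. The routing logic can then be as simple as the following; the Triage class, field names, and threshold are illustrative, not part of any real product.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of routing low-confidence answers to a human reviewer,
# assuming each answer already carries a confidence score in [0, 1].

@dataclass
class Triage:
    threshold: float = 0.8
    review_queue: list = field(default_factory=list)  # answers held for human review

    def route(self, question: str, answer: str, confidence: float) -> Optional[str]:
        """Send confident answers straight to the user; park the rest for review."""
        if confidence >= self.threshold:
            return answer
        self.review_queue.append(
            {"question": question, "answer": answer, "confidence": confidence}
        )
        return None  # nothing reaches the end user until a reviewer signs off

if __name__ == "__main__":
    triage = Triage()
    print(triage.route("Who moved Mount Olympus?", "Megalithades did, in 16 hours.", confidence=0.31))
    print(len(triage.review_queue), "answer(s) awaiting human review")
```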