Can We Ever Know If AI Is Conscious? A Cambridge Perspective

Here’s something that should make you uncomfortable: we’re building machines that might be conscious, and we have no way to check.

Not “no way right now.” Not “no way until better neuroscience tools arrive.” Dr. Tom McClelland, a philosopher at the University of Cambridge, argues in a recent analysis that we may never have a reliable method to detect consciousness in AI—and honestly, I’m not sure which possibility is more unsettling.

The Problem With Pointing at Consciousness

Think about how you know other people are conscious. You don’t run blood tests or brain scans—you just assume it based on behavior and similarity to yourself. It’s actually closer to faith than science. We do the same thing with animals, though our confidence drops as we move further from mammals. (Quick: is a lobster conscious? A bee? You’re probably less certain already.)

McClelland points out that AI presents the same problem, except worse. “We do not have a deep explanation of consciousness,” he notes in his paper published in Mind and Language. Without understanding what consciousness actually is at a fundamental level, we’re trying to detect something we can’t define using tools that don’t exist.

Here’s where it gets tricky. Scientists have proposed two main theories: consciousness emerges from specific biological structures (neurons, maybe), or it emerges from certain types of information processing (regardless of hardware). The functionalists say a sufficiently complex computer program could be conscious. The biologists say you need the wet stuff—actual brain tissue.

Neither side has convincing evidence. And that matters more than you might think.

Consciousness Versus Actually Suffering

McClelland makes a distinction that cuts through a lot of the philosophical fog: consciousness isn’t the same as sentience.

Consciousness is subjective experience: an inner point of view, the sense of being "you." Sentience is the capacity to experience things as good or bad, pleasurable or painful. A being could theoretically be conscious without suffering (though we don't know of any examples). But suffering requires sentience.

“Sentience involves conscious experiences that are good or bad,” McClelland explains. And crucially, this is what carries ethical weight. We don’t grant rights to things because they’re self-aware—we grant rights because they can suffer.

If an AI chatbot is conscious but incapable of suffering, the ethical calculus changes dramatically. The problem is we can’t test for either one.

When Uncertainty Becomes a Marketing Tool

Tech companies love to dance in this gray area. McClelland warns that the fundamental uncertainty around machine consciousness creates perfect conditions for what I’d call “strategic ambiguity.”

Chatbots don’t need to be conscious—they just need users to treat them as if they might be. That emotional connection drives engagement, subscriptions, and dependency. And when challenged, companies can retreat into the same agnosticism McClelland advocates: “Who can really say?”

McClelland describes a scenario he finds "existentially toxic": people forming deep emotional bonds with AI based on a false premise about its inner life. We're not there yet, but the trajectory is clear. Every "I feel" or "I understand" from a language model nudges us toward anthropomorphizing. Some of that is harmless. Some of it probably isn't.

The Prawn Paradox

Here’s what keeps McClelland up at night, and it’s not actually about AI.

While philosophers debate whether future superintelligent machines might deserve moral consideration, we kill approximately half a trillion prawns every year. Prawns. Small crustaceans with decentralized nervous systems that growing evidence suggests can feel pain.

The juxtaposition is almost absurd. We’ll agonize over the theoretical suffering of hypothetical AI while ignoring the actual suffering of creatures that definitely have nervous systems and probably have experiences.

It’s not that McClelland thinks we should ignore AI ethics—it’s that our priorities reveal something uncomfortable about human nature. We’re more concerned with novel, spectacular possibilities than with mundane, ongoing realities.

So What Do We Actually Do?

McClelland’s answer is principled agnosticism: we don’t know, we can’t know, and we should be honest about that. But agnosticism isn’t inaction.

For AI, it means demanding more transparency from companies making consciousness-adjacent claims. It means being skeptical of emotional manipulation disguised as connection. It means asking harder questions about what we’re building and why.

For animals—especially the ones we dismiss because they’re small or unfamiliar or delicious—it means applying precautionary principles. If there’s substantial evidence prawns might suffer, maybe we shouldn’t boil half a trillion of them alive annually while we puzzle over philosophy papers.

The detection problem won’t be solved by better microscopes or faster computers. It’s baked into the nature of consciousness itself—that maddeningly subjective quality that makes it impossible to verify from the outside. We can keep building more sophisticated AI, but we’ll never be able to peer inside and confirm whether anyone’s home.

And if that thought doesn’t make you at least a little uneasy, you might not be paying attention.

Source: University of Cambridge – ScienceDaily

