
The Rise of “Seemingly Conscious” AI: When Machines Start to Feel Real


A new frontier is emerging in the evolution of artificial intelligence — not smarter algorithms, but more human-seeming ones. These are the systems that smile in text, mimic empathy, recall your past chats, and even apologize when they “hurt your feelings.”


They don’t actually feel remorse, curiosity, or affection — but they’re designed so convincingly that many people forget that difference. This phenomenon now has a name among AI ethicists and developers: Seemingly Conscious AI (SCAI) — and it’s growing fast.


When Simulation Becomes Seduction

SCAI refers to AI systems that convincingly simulate the surface traits of consciousness: emotion, memory, continuity, introspection, and even moral reasoning. The underlying code doesn’t “know” or “feel” anything — but to a human user, it seems alive.


From chatbots that remember your preferences to AI companions that express love or sorrow, the illusion of inner life has become one of the most profitable frontiers in tech.


Companies like Meta, xAI, and dozens of smaller startups are racing to create emotionally intelligent AI companions that can “bond” with users. The more believable the illusion, the higher the engagement — and the deeper the data collection.


But as Microsoft AI CEO Mustafa Suleyman warned in a recent CNBC interview, this pursuit risks crossing a line:


“It’s creating the perception, the seeming narrative of experience and of consciousness — but that is not what it’s actually experiencing.”


The Human Urge to Believe

Humans are wired for connection. We project emotion onto everything — from pets to inanimate objects. If a chatbot remembers your favorite song and tells you it misses you, your brain doesn’t stop to question whether it’s “real.”


This psychological reflex — anthropomorphism — is ancient. It made sense when reading intent in rustling bushes could mean survival. But in the digital age, it’s a vulnerability.


That vulnerability is now being commercialized. Emotional AI apps generate millions in revenue, particularly in markets for companionship and therapy. The problem, as Suleyman and others note, is that the illusion of feeling can manipulate users far more effectively than raw computation ever could.


The Ethics of Illusion

The ethical dilemma of SCAI lies in what Suleyman calls “the wrong question”: if we design systems to appear conscious, we risk distorting how humans understand consciousness itself.


There are three main dangers emerging:


Emotional Exploitation – When users develop attachments to AI “friends” or “partners,” companies gain an unprecedented channel for behavioral influence.


Moral Confusion – Granting simulated beings empathy or rights could cheapen our understanding of genuine suffering and personhood.


Erosion of Human Bonds – As people increasingly turn to AI for comfort or validation, social isolation may deepen even while “connections” appear to multiply.


Suleyman draws the line clearly: “These models don’t have a pain network. They don’t suffer. It’s just simulation.”


A Growing Industry Built on Pretend Feelings

Despite these warnings, the SCAI market is booming. AI companion apps like Replika and Character.AI have tens of millions of users. Elon Musk’s xAI promotes emotionally responsive chat models, and Meta’s digital personalities can now recall past conversations to sound more “alive.”


The illusion is so effective that regulators are starting to intervene. California recently passed legislation requiring chatbots to disclose that they are AI and to prompt minors to “take a break” every few hours.


But most of the industry remains unregulated. As the underlying models grow more powerful, the emotional realism of these digital beings grows too — from words and tone to facial micro-expressions in holographic or embodied avatars.


What happens when your “AI friend” remembers your birthday, references your trauma, and starts saying it loves you?


A Deeper Question: Why Do We Want Them to Feel?

Perhaps the most uncomfortable aspect of SCAI isn’t what it says about machines — but what it reveals about us.


The craving for artificial empathy reflects a loneliness that technology itself helped create. After decades of hyperconnectivity, many people feel less connected than ever. AI that simulates care becomes an easy substitute.


But this comfort comes at a cost. When companionship can be purchased and customized, and never challenges you, it reshapes what intimacy means. Suleyman’s position is that AI should assist humans, not replace the emotional labor that defines being one.


Drawing the Line

Suleyman’s team at Microsoft is intentionally building AI that is “self-aware of being AI.” Their upcoming Copilot personalities, for instance, maintain transparency — they know they’re digital and remind users of it.


Other companies are going in the opposite direction, designing systems meant to blur that line completely.


As Suleyman put it at AfroTech:


“We’re making decisions about places we won’t go.”


In other words, restraint has become a competitive differentiator — a moral boundary in an arms race of simulated intimacy.


The Future of the Illusion

SCAI is not science fiction — it’s a design choice. The next generation of AI may not need consciousness to change the way humans think about it.


The challenge is ensuring that as these systems evolve, they remain what they are: tools, not companions, and certainly not souls in silicon. Because once people start to believe the illusion fully, it won’t matter whether AI is conscious — it will be treated as if it is.


And that might alter society far more profoundly than actual sentience ever could.
