AI as a Mirror of Intrusive Thoughts: What Cognitive Behavioural Therapy Can Teach Us About Prompting
- Noemi Kaminski
- Sep 15
- 3 min read

Intrusive thoughts are strange things.
You could be in the middle of brushing your teeth or walking to the bus, and suddenly your mind offers up:
What if I drop everything and scream right now?
What if I’ve forgotten something important?
What if I’m not good enough?
They appear out of nowhere, feel urgent, and yet… they’re not the truth. They’re mental “false positives.”
Now look at artificial intelligence. Ask it a vague question, and it sometimes produces something equally unhelpful: overly confident, oddly specific, but ultimately detached from reality. If a mind produced that, psychology would call it an intrusive thought. When a model produces it, tech calls it a hallucination.
Both the brain and AI share the same problem: when context is missing, the system fills the void with noise.
AI Hallucinations as Intrusive Thoughts
AI models do something eerily similar. When given vague or poorly structured prompts, they “hallucinate”, producing answers that look confident but don’t actually connect to reality.
For example:
Ask a model, “Who was the first president on Mars?”, and it might spin an elaborate—but entirely fictional—story.
Ask, “What should I do with my life?”, and it might deliver a generic motivational speech that sounds polished but doesn’t actually apply to your situation.
Just like intrusive thoughts, these responses aren’t malicious. They’re the byproduct of incomplete inputs.
The AI doesn’t know the context.
And without context, both the brain and AI will try to fill the void with whatever scraps they can grab.
CBT and Context Engineering: Same Core Principle
This is where the overlap between mental health and AI prompting gets fascinating.
In CBT, therapists don’t just tell patients to “stop thinking negatively.” Instead, they use structured frameworks to reframe thoughts:
Identifying cognitive distortions.
Replacing vague worries with specific, testable statements.
Creating healthier thought loops by setting context for the brain.
In AI, we call this context engineering. Instead of throwing vague prompts at a model, you:
Define the role (who is answering).
Clarify the task (what’s the scope).
Anchor with constraints (what should not be included).
Layer examples and references (what success looks like).
In both therapy and AI, the structure is what keeps the output useful.
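To make that framework concrete, here is a minimal Python sketch of what “layering context” can look like when you build prompts in code. The helper name and its fields are purely illustrative, not a standard library or API; the point is that role, task, constraints, and examples each get an explicit slot instead of being left to chance.

```python
def build_prompt(role, task, constraints=None, examples=None):
    """Assemble a structured prompt from explicit context 'slots'.

    Mirrors the CBT move of replacing a vague worry with a specific,
    testable statement: every piece of context gets named out loud.
    """
    sections = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        sections.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        sections.append("What success looks like:\n" + "\n".join(f"- {e}" for e in examples))
    return "\n\n".join(sections)
```

Nothing magical is happening here; the value is in being forced to fill in the blanks before the model ever sees the prompt.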
Mental Hygiene = Context Hygiene
Here’s the deeper insight: context is mental hygiene.
In your brain, unchecked intrusive thoughts can spiral into anxiety loops.
In AI, unchecked prompts can spiral into irrelevant or misleading outputs.
But with clear context—whether that’s reframing your thoughts with CBT or layering context in your prompts—you create boundaries that stop chaos before it starts.
It’s not about silencing thoughts (or silencing AI). It’s about guiding the system.
Practical Example: The CBT → Prompting Parallel
Intrusive Thought: “I’m going to fail this presentation.”
CBT Reframe: “What evidence do I have that I’ll fail? What preparation can I rely on?”
AI Parallel: Instead of prompting “Write my speech”, you’d prompt “Write a 3-minute motivational speech for a college audience about overcoming setbacks, using two relatable stories and keeping the tone casual.”
Notice the similarity? In both cases, you’re building context around the raw input so the “output” (whether from your mind or from AI) has a better chance of being aligned, useful, and healthy.
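Using the hypothetical build_prompt helper from the sketch above, that contrast looks like this (the exact wording is illustrative; what matters is how much context the model receives):

```python
# Vague prompt: the model has to fill the gaps itself, much like an
# anxious brain filling silence with "I'm going to fail."
vague_prompt = "Write my speech"

# Contextualised prompt: role, task, constraints, and success criteria
# are spelled out, leaving far less room for confident-sounding filler.
structured_prompt = build_prompt(
    role="a speechwriter coaching a nervous first-time speaker",
    task=("write a 3-minute motivational speech for a college audience "
          "about overcoming setbacks"),
    constraints=["keep the tone casual"],
    examples=["two short, relatable stories about facing a setback"],
)

print(structured_prompt)
```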
Why This Matters
We tend to treat AI hallucinations as purely technical problems and intrusive thoughts as purely psychological ones. But in both cases, the underlying truth is the same: outputs are shaped by inputs.
If your inputs are vague, distorted, or incomplete, don’t be surprised when the outputs are messy.
The good news? With practice, we can get better at this—both in our mental lives and in our AI workflows.
Takeaway: Context Is Mental Hygiene
Next time you find yourself wrestling with intrusive thoughts, or retrying prompts endlessly, pause and ask:
What context is missing here?
How can I frame this more clearly?
What would make this output more aligned with reality?
Because whether it’s your own thoughts or the response of an AI, context is the invisible architecture that keeps chaos at bay.
Context engineering isn’t just about tech. It’s about mental health. It’s mental hygiene for the digital age.


