
We Keep Asking What AI Can Do — But What Should It Stop Doing?


There’s a question we’ve been asking a lot lately: “What can AI do?”


It’s a fair question. We’ve watched generative AI compose music, write code, design layouts, answer emails, analyze tone, recommend therapy scripts, and even mimic our voices. Every day, there’s another demo showing how far the boundaries can stretch.


But maybe it’s time we ask a different question: “What should AI stop doing?”


Because the fact that we can automate something doesn’t mean we should, especially when it touches the deeply human parts of our work and lives.


When helpful crosses into invasive


AI thrives on data. But at what point does helpfulness become surveillance?

Take an AI that “nudges” you when your calendar is too full, or when your tone in an email seems off. That might sound supportive, until it feels like someone’s constantly reading over your shoulder.


Now imagine that same system recommending who you should talk to less, or suggesting that your emotional state isn’t “optimal for collaboration.” Helpful? Maybe. Creepy? Definitely possible.


The line between support and overreach isn’t just technical; it’s emotional.

We need to build systems that understand the difference between:


  • offering insight vs. enforcing behavior

  • surfacing patterns vs. drawing conclusions

  • suggesting help vs. assuming consent


Human agency matters more than AI accuracy


The smartest AI in the world can’t replace someone’s sense of autonomy.

An AI-generated to-do list might be efficient, but if it removes a person’s control over how they plan their day, it can backfire. What looks like “productivity” from the outside might feel like micromanagement from within.


People don’t just want tools that make decisions; they want tools that respect their right to decide.


That’s why meaningful AI design needs more than just good models. It needs boundaries. Opt-ins. Off switches. Transparency.
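A rough sketch of what those boundaries can look like in practice (the feature names and shapes here are hypothetical, not any particular product’s API): every AI assist is opt-in, defaults to off, and keeps its own off switch.

```typescript
// Hypothetical preference model: each AI assist is explicitly opt-in.
type AssistFeature = "toneSuggestions" | "calendarNudges" | "autoReplies";

interface AssistPreferences {
  enabled: Record<AssistFeature, boolean>; // per-feature off switch
  explainOnUse: boolean;                   // show "why am I seeing this?" alongside each nudge
}

// Nothing is on until the user turns it on.
const defaultPreferences: AssistPreferences = {
  enabled: { toneSuggestions: false, calendarNudges: false, autoReplies: false },
  explainOnUse: true,
};

// Every feature checks consent before it acts, not after.
function isAllowed(prefs: AssistPreferences, feature: AssistFeature): boolean {
  return prefs.enabled[feature] === true;
}
```

The point isn’t the code; it’s the default. Off by default, on by choice, reversible at any time.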


AI and consent: it’s not just legal, it’s emotional


We often frame consent in legal terms: checkbox agreements, privacy policies, terms of service. But in practice, consent is emotional. It’s about whether something feels okay. Whether you feel seen or watched, supported or controlled.

AI systems should be designed with this emotional lens in mind.

Some questions worth asking during product development:


  • Would this feature still be useful if the user knew exactly how it worked?

  • Does the user want this kind of help, or are we assuming they do?

  • Can they easily say no, and mean it?


From frictionless to intentional


The trend in tech has long been toward frictionless experiences: less thinking, more speed, fewer clicks.

But when it comes to AI, we might need more friction, not less. Not in a way that burdens the user, but in a way that restores their power.


Friction can be thoughtful:


  • A moment of confirmation before an AI sends a message on your behalf

  • A simple question like “Do you want help with this?” before auto-completing a task

  • A gentle reminder: “You can always turn this off.”


These small moments invite the user back into the loop. They make the experience human-centered, not AI-centered.
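To make that concrete, here’s a minimal sketch of a confirmation gate (the confirm and send functions are stand-ins for whatever prompt and messaging layer a product actually uses): nothing goes out on the user’s behalf until they explicitly say yes.

```typescript
// A small, deliberate piece of friction: show the draft and send only on an explicit yes.
async function sendWithConfirmation(
  draft: string,
  confirm: (question: string) => Promise<boolean>, // stand-in for a UI prompt
  send: (message: string) => Promise<void>         // stand-in for the messaging layer
): Promise<void> {
  const ok = await confirm(`Send this message on your behalf?\n\n${draft}`);
  if (!ok) {
    return; // the user said no, and that answer is final; nothing is sent
  }
  await send(draft);
}
```

One pause, one question, and the decision stays with the person.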


So what should AI stop doing?


AI should stop assuming. Stop oversimplifying. Stop replacing the things people find meaning in, like reflection, struggle, and expression, just because they’re “inefficient.”

It should stop being invisible when it matters. Stop replacing consent with convenience.


Stop doing things for people without asking if they’d prefer to do it with you.


The goal isn’t to limit AI. It’s to ground it.

We’re building tools that shape how people think, work, and feel. That’s powerful. And with that power comes the responsibility to ask better questions.


Not just: How far can this go? But also: Where should this stop?


Because the future of AI isn’t just about innovation; it’s about intention.
