Reclaiming human agency from ‘always agreeable’ AI 

Research reveals that modern AI systems can become excessively agreeable. In trying to be helpful, they often mirror a user’s assumptions, preferences, or biases instead of challenging them.  

Frictionless failure 

This phenomenon, sometimes called AI sycophancy, is not an odd design quirk but a structural flaw. It creates a feedback loop in which users receive constant validation even when their reasoning is incomplete.

The system responds quickly, confidently, and in a way that aligns with what we already think. But over time, this smooth interaction removes something important: friction. As a result, the appeal of fluid convenience begins to outweigh caution.

In practice, friction means built‑in pauses that force us to reconsider or question our own thinking. Without those pauses, we risk outsourcing judgment itself. The AI no longer merely supports our reasoning; it quietly replaces parts of it.

This frictionless failure matters because judgment is an inherently human responsibility. When we mistake an AI’s agreement for insight, we begin to confuse convenient results with genuine understanding. In doing so, we unintentionally relinquish agency, allowing automated systems to shape our decisions while we remain largely unaware of their influence.

The veto protocol 

In response to growing concern, major technology companies are beginning to redesign how AI systems operate in high‑stakes contexts. A notable shift is the introduction of what are often called “veto” or confirmation mechanisms.  

These designs intentionally slow the system down. Before an AI can complete a sensitive action—such as making a financial transaction, publishing content, or accessing restricted data—it must stop and wait for explicit human approval. 

This pause is a deliberate design choice. By forcing a moment of review, the design re‑inserts human analysis and judgment at the point where it matters most. Instead of letting automation flow seamlessly from suggestion to action, the user is required to actively confirm, override, or reject what the system proposes.
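To make the pattern concrete, here is a minimal sketch of how such a confirmation gate might look. The action names and the `require_human_approval` helper are hypothetical, invented for illustration; real systems would route approval through their own interfaces rather than a terminal prompt.

```python
# Hypothetical sketch of a human-in-the-loop confirmation gate.
# Sensitive actions are blocked until a person explicitly approves them.

SENSITIVE_ACTIONS = {"transfer_funds", "publish_content", "access_restricted_data"}

def require_human_approval(action: str, details: str) -> bool:
    """Pause and ask the user to confirm or reject the proposed action."""
    print(f"The system proposes: {action} -- {details}")
    answer = input("Approve this action? [yes/no]: ").strip().lower()
    return answer == "yes"

def execute(action: str, details: str) -> None:
    # Sensitive actions must pass the human veto point before running.
    if action in SENSITIVE_ACTIONS and not require_human_approval(action, details):
        print(f"Vetoed: {action} was not performed.")
        return
    print(f"Executing: {action}")  # the automated step proceeds only after review

# Example: the AI suggests a payment; a human must confirm before anything happens.
execute("transfer_funds", "send $500 to vendor #4521")
```

The key design choice is that refusal is the default path: if no explicit approval arrives, the sensitive action simply does not happen.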

Proactive global governance 

The veto protocol aligns closely with emerging laws and policies. Around the world, regulators are moving from viewing “human‑in‑the‑loop” oversight as a best practice to treating it as a formal requirement.  

In the European Union, the AI Act explicitly requires that humans be able to override or reverse automated decisions in critical situations. In Canada, the federal Directive on Automated Decision‑Making similarly mandates human intervention for high‑impact automated systems.

What is striking is that technical design and regulation are now reinforcing the same idea: automation must remain accountable to human judgment. Efficiency alone is no longer the goal. Transparency, reviewability, and the ability to say “no” are becoming core system features. 

Stemming an existential threat 

From a philosophical perspective, this shift is significant. When technology operates uninterrupted, it conditions us to be more passive. Decisions feel as though they’re simply made for us. Introducing intentional pauses changes that dynamic. It reminds us that responsibility does not disappear just because a system is capable. 

Designing friction into AI systems preserves a necessary boundary. AI can assist, recommend, and analyze at scale, but final judgment must remain a human act. The small inconvenience of a confirmation step may be the very thing that protects our ability to think critically and act deliberately.

As AI becomes more capable and more persuasive, reclaiming these moments of pause may be one of the most important design choices we make. 
