Agentic AI: Would you choose autopilot or manual landing?

Navigating the design frontier of agentic experiences: UX-led AI is redefining how users engage with digital products, especially as agentic AI grows smarter and more “human-like.” As designers, we face a contentious choice: lean into that familiarity, or disrupt expectations? This post explores four critical perspectives, offering practical examples for each, and argues for a more intentional, user-centric approach to AI agent UX.

Tony Tudor

7/29/2025 · 2 min read

1. Familiarity Breeds Expectation: The Naturalization of AI Agents

As AI agents evolve, they feel increasingly natural, and this new familiarity shapes user expectations around interaction patterns.

Think of the difference between using an ATM and chatting with an AI-powered appointment scheduler. Even when users know both are machines, the ATM is approached with transactional, mechanical intent; the appointment agent, on the other hand, is expected to be conversational, adaptive, and even a little empathetic.

Designers must recognize these shifting expectations. For instance, a voice-based travel booking AI should mimic the flow of natural dialogue, offering confirmations (“Did you mean June 5th or July 5th?”) and clarifications, not just rigid form-filling. Failing to meet these nuanced expectations risks cognitive friction and user drop-off.
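To make the confirmation pattern concrete, here is a minimal sketch of how a booking agent might phrase a disambiguation prompt when a spoken date matches more than one candidate. The function name and flow are hypothetical, not from any real assistant framework:

```python
from datetime import date

def clarify_date(candidates):
    """Turn ambiguous date matches into a natural confirmation prompt."""
    labels = [f"{d:%B} {d.day}" for d in candidates]
    if len(labels) == 1:
        # Single match: confirm rather than silently proceed.
        return f"Booking for {labels[0]}, correct?"
    # Multiple matches: ask the user to disambiguate conversationally.
    return f"Did you mean {' or '.join(labels)}?"

# A spoken "the 5th" could plausibly match two upcoming months:
print(clarify_date([date(2025, 6, 5), date(2025, 7, 5)]))
# → "Did you mean June 5 or July 5?"
```

The design point is that ambiguity resolves through dialogue, not an error state: the agent mirrors how a human assistant would check before committing.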

2. Users Will Take the Shortcut—But Only If They Trust You

Users love shortcuts and will always pick the path of least resistance if they have confidence in the platform.

For example, imagine an AI-powered insurance claim agent. If the user doesn’t trust the agent to handle their request accurately, they’ll bypass automated flows and demand human support, even if it takes longer. Conversely, if the agent has built trust through transparent feedback, clear error handling, and visible progress, they’ll go the whole nine yards with the automated journey.

But here’s the kicker: Users judge the experience, not just the outcome. A fast claim submission that feels confusing or opaque results in dissatisfaction, even if the claim is approved. Trust is the UX team's currency—invest in it with micro-interactions that reassure, surfaces that signal credibility, and recovery paths for edge cases.

3. Bespoke Experience Design vs. Agentic Autonomy: Who Holds the Reins?

Should we design every pixel of the AI experience, or let AI agents “design themselves” by adapting to context on the fly?

This tension is real in agentic AI. Picture a customer service chatbot: Should it strictly follow a prescribed flow, or should it be given flexibility to interpret intent and escalate beyond its original brief? Allowing the agent to break out of rigid task boundaries—say, noticing a frustrated tone and offering a callback—can delight users. But too much autonomy, and we risk unpredictable, inconsistent UX.

The practical play is to create robust design frameworks—think interaction models, guardrails, and personality matrices—that give AI agents room for adaptive problem-solving while staying on-brand and predictable.
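One way to picture such a framework is as an explicit boundary object the agent consults at every turn. The sketch below is a hypothetical illustration (the class, action names, and trigger labels are invented for this example): the agent may improvise inside the guardrails, such as offering a callback to a frustrated user, but never outside them.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGuardrails:
    """Illustrative design framework: the agent adapts within these bounds."""
    allowed_actions: set = field(default_factory=lambda: {
        "answer_faq", "check_order", "offer_callback"})
    escalation_triggers: set = field(default_factory=lambda: {
        "frustrated", "angry"})
    tone: str = "friendly-professional"  # one entry from a personality matrix

    def next_step(self, intent: str, sentiment: str) -> str:
        # Adaptive move: a frustrated user gets a callback offer even if
        # the scripted flow never mentioned one.
        if sentiment in self.escalation_triggers:
            return "offer_callback"
        # Otherwise stay inside the prescribed action set; anything
        # off-script escalates to a human rather than improvising.
        return intent if intent in self.allowed_actions else "handoff_to_human"

rails = AgentGuardrails()
print(rails.next_step("check_order", "frustrated"))  # → offer_callback
print(rails.next_step("change_billing", "neutral"))  # → handoff_to_human
```

The design choice worth noting: autonomy lives in the sentiment branch, predictability in the allow-list. Widening the action set expands what the agent may do without ever letting it invent behavior the brand hasn’t sanctioned.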

4. AI Is Just the Engine—But Users Still See a Mind

No matter how much we emphasize “AI as a powerful processing capability,” users will interact with it as if it’s an intelligent being.

This perception gap is controversial: Should we push toward making AI “truly intelligent,” or should we manage expectations and slowly recalibrate user understanding? For now, most users anthropomorphize agents, expecting empathy and reasoning.

When a voice assistant says, “Sorry, I didn’t catch that,” users expect it to learn and adapt in the conversation. If it repeats the same error, trust erodes. Designers need to decide: Are we building the illusion of intelligence or guiding users to a healthier, more realistic relationship with agentic systems?

Conclusion: Designing for the Human, Not the Hype

As agentic AI blurs the line between tool and collaborator, the best UX is grounded in real human needs, not just technical possibility. We must question where to lean into natural interaction, when to establish trust, how much autonomy to grant, and, ultimately, how to manage expectations around the “intelligence” we put into the world. The controversy isn’t going away—but with a UX mindset, we can navigate it with empathy and clarity.