The Authenticity Gap

Why customers keep asking for humans when the AI works fine.

I spent nearly two hours on a claims call with my insurance company last month. Not because the claim was complicated, but because the bot couldn’t understand the situation. Every attempt to explain what happened got routed back to the same scripted questions. By the time I reached a human, I was exhausted and frustrated before the actual conversation even started.

A week later, my security system started triggering random false alarms. The company has removed its 800 number entirely. So I’m texting with a bot about why my alarm keeps going off at 2am, and there’s zero recognition that this might be stressful. Just troubleshooting scripts.

I build products for a living. I understand the economics of AI in customer service. I still wanted to talk to a human.

The Efficiency Paradox

The metrics on these systems probably look great. Response times down. Resolution rates up. Cost per interaction dropping. I’ve seen this pattern firsthand before: a team celebrating engagement numbers while nobody checked whether engaged users were satisfied users.

Gartner’s 2024 research found that 64% of customers would prefer companies not use AI in customer service. That number didn’t surprise me. These aren’t technophobes. I use AI tools constantly. But when something matters, when I’m stressed, when I need someone to understand context, or when the situation doesn’t fit the script, that’s when I want a person.

The disconnect isn’t about capability. It’s about what service actually means to people. It’s another version of the gap between internal quality and external value that shows up everywhere in product work.

When someone calls about a billing error, they want resolution. When someone calls about a billing error after their card got declined in front of their kids at the grocery store, they want to be heard. Same problem, different need.

What Authenticity Actually Means

Thinking back on those experiences, I see a pattern that seems to be about recognition rather than resolution.

With the insurance call, I didn’t just want my claim processed. I wanted someone to understand that my roof had been damaged in a storm and that heavy snow was in the forecast within the week. I needed someone to understand that this was one more thing in an already overwhelming week. The bot couldn’t hold that context. The human could.

With the security system, I didn’t just want troubleshooting steps. I wanted someone to acknowledge that random 2am alarms are genuinely unsettling and that I wasn’t sleeping well because of it. A bot can walk me through resetting sensors. It can’t actually care whether my week gets better.

Research from Medallia found that 61% of consumers are willing to pay more for personalized experiences. But “personalized” doesn’t mean “the algorithm knows my purchase history.” It means “someone sees me as a person, not a ticket.”

The authenticity gap isn’t about information or efficiency. It’s about the difference between being processed and being understood.

Where This Gets Complicated

Here’s the uncomfortable part: I’ve been on the other side of this decision.

I’ve advocated for AI in customer service, and I still do. I’ve seen the math on cost per interaction. I know that the most effective AI implementations work invisibly, handling volume that humans simply can’t, resolving straightforward issues faster, and freeing human agents for situations that actually need judgment.

The question I’m poking at now is how to design the handoff experience better. When does efficiency serve the customer, and when does it leave them feeling like they don’t matter?

Some patterns I’ve observed that seem to work:

Making the human option visible matters more than most teams realize. Hiding “talk to a person” behind layers of AI interaction tells customers you’d rather not deal with them. Even if most won’t use it, knowing it’s there changes the experience.

Equally important is making sure the AI knows its limits. The worst experiences happen when AI tries to handle situations beyond its capability. A chatbot that recognizes “this person is frustrated and needs a human” creates better outcomes than one that keeps trying to resolve the issue algorithmically.

And finally, designing for emotional context rather than just problem type changes everything. The same question can require different handling depending on what’s behind it. “Where’s my order?” from someone tracking a gift is not the same inquiry as “Where’s my order?” from someone whose medication didn’t arrive.
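To make the second and third patterns concrete, here’s a minimal sketch of what an escalation check might look like. It isn’t any vendor’s API or a production design; the function, the field names, the keyword lists, and the thresholds are all made-up assumptions about signals a team might track.

from dataclasses import dataclass

# Illustrative signals only; a real system would derive these from intent
# detection and conversation history rather than hard-coded lists.
FRUSTRATION_CUES = {"frustrated", "ridiculous", "this is the third time", "talk to a person"}
HIGH_STAKES_CATEGORIES = {"medication", "safety_alarm", "storm_damage_claim"}

@dataclass
class Interaction:
    messages: list[str]        # what the customer has said so far
    category: str              # coarse issue type, e.g. "order_status"
    failed_bot_attempts: int   # scripted attempts that didn't resolve it

def should_hand_off(interaction: Interaction) -> bool:
    """Escalate to a human when emotional or contextual signals suggest the script won't be enough."""
    text = " ".join(interaction.messages).lower()

    # The AI knows its limits: explicit frustration or a direct request for
    # a person ends the algorithmic retry loop immediately.
    if any(cue in text for cue in FRUSTRATION_CUES):
        return True

    # Emotional context, not just problem type: some categories are
    # high-stakes no matter how routine the question looks.
    if interaction.category in HIGH_STAKES_CATEGORIES:
        return True

    # A couple of failed scripted attempts is usually evidence enough.
    return interaction.failed_bot_attempts >= 2

# The 2am alarm conversation should escalate on the first message.
print(should_hand_off(Interaction(
    messages=["my alarm keeps going off at 2am and I'm not sleeping"],
    category="safety_alarm",
    failed_bot_attempts=0,
)))  # prints True

The specific heuristics matter less than the fact that the escalation decision is designed deliberately, instead of being the fallback that fires only after the bot runs out of script.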

Gartner predicts the EU will mandate a “right to talk to a human” in customer service by 2028. Whether or not regulation is the answer, the fact that analysts expect it signals something real about customer expectations.

The Trust Foundation

What I keep coming back to is trust. Edelman’s research on brand trust found that 81% of consumers say trusting a brand to “do what is right” is a deciding factor in purchase decisions. That tracks with my experience. Trust isn’t built through efficient problem resolution alone. It’s built through moments where someone demonstrates they actually care about your outcome.

AI can be trustworthy in the sense of being reliable and accurate. It can’t (yet) be trustworthy in the sense of having your interests at heart, especially when those interests are subjective or in conflict with the company’s. That distinction matters more than we sometimes acknowledge when designing service experiences.

The companies getting this right aren’t choosing between AI efficiency and human connection. They’re using AI to handle what AI handles well and preserving human touchpoints for moments where authenticity matters.

Artificial Empathy vs. Authenticity

As AI gets better at simulating empathy, does the authenticity gap close or widen?

My instinct says it closes, but it’ll take time for the trust to build. In the meantime, the better AI gets at performing care, the more jarring it becomes when you realize there’s no one actually there. Maybe the next generation, raised with AI assistants, will feel differently about what “authentic” even means.

What I do know is how I felt after those two hours with the insurance bot. Exhausted. Dismissed. Like my time and stress didn’t matter to anyone. The human who eventually helped me resolved the claim in fifteen minutes. But more than that, she acknowledged the frustration. She apologized for the runaround. She made me feel like a person dealing with a problem, not a ticket being processed.

That’s not a problem to optimize away. It’s information about what actually matters—and a reminder that the metrics we celebrate might be measuring the wrong things entirely.