The Trust Gap Is AI's Next Big Problem
Stanford's 2026 AI Index contains a number that's been bothering me since I read it: 10 percent. That's the share of Americans who say they're more excited than concerned about AI in their daily lives. The same report shows 56 percent of AI experts believe the technology will be broadly positive over the next 20 years.
Same technology. Wildly different read on where it's going.
The standard response to this kind of data is to treat it as a communications problem. If we just explain it better, people will come around. I've started to believe that's the wrong diagnosis entirely. The gap between expert optimism and public anxiety isn't confusion. It's a reasonable response to how AI is actually being deployed right now.
The Deployment Layer Is Where It Breaks
There are two very different ways to encounter AI in 2026.
The first is inside the development ecosystem — working with frontier models, watching them solve hard problems, seeing the pace of improvement. If this is your daily context, optimism is almost the obvious conclusion.
The second is as a customer, patient, or employee encountering these systems in production — the chatbot that can't escalate, the automated screening system with no visible logic, the recommendation feed that gets progressively worse. If this is your daily context, anxiety is almost the obvious conclusion too.
Both groups are responding rationally to what they actually see. The gap isn't about one side being right. It's about the deployment layer, which sits awkwardly between what AI can do and what it's actually doing in most production contexts right now.
What I'm Actually Watching
I run technology for a dental services organization, which means AI adoption questions aren't abstract for me. And the trust stakes in a clinical context are higher than in most settings: when a tool fails, it can touch patient care, not just productivity.
What I've noticed is that the teams with real adoption — not just installation, but sustained use — have one thing in common. They didn't lead with capability. They started by being honest about what the tool couldn't do.
One of our best-performing internal deployments started with the PM walking a skeptical team through a live demo and deliberately triggering failure cases. Showing the edges before anyone asked about them. Adoption three months later was the best we'd seen on any AI rollout. That pattern has held up every time I've seen it applied.
The instinct in most enterprise AI rollouts is to sell the upside. Here's what it can do. Here's the productivity gain. And then people encounter the failure case a few weeks in and start wondering what else they weren't told.
The Overclaiming Tax
Trust in AI erodes most sharply not when the technology is wrong, but when people feel they weren't told it could be wrong.
The industry has been making withdrawals from that trust account for two years. Every vendor promise that didn't hold, every "it'll transform your workflow" pitch that delivered something more modest — those compound. We're starting to feel how low the balance has gotten.
The organizations I think will hold an advantage over the next two years aren't necessarily the ones with the most sophisticated models. They're the ones setting honest expectations, showing their work, and building oversight that isn't theater. That sounds less exciting than transformation. But it compounds in a way that overpromising doesn't.
The 10 percent who are already excited about AI aren't the problem to solve. The question is what you're doing about the other 90.
Jared Mabry is SVP and CIO at D4C Dental Brands and co-founder of ClearPoint Logic. He writes about technology leadership and enterprise AI at jaredmabry.com. Subscribe to The Restless CIO for weekly insights.