The Trust Gap Is AI's Next Big Problem

Stanford released its 2026 AI Index this week, and buried inside the data is a number that I keep coming back to: 10%.


That is the percentage of Americans who say they are more excited than concerned about the growing use of AI in daily life. Meanwhile, 56% of AI experts surveyed said they believe AI will have a positive impact on the U.S. over the next 20 years.

Same technology. Very different read on where it is heading.

I do not think this is a communications problem. I think it is a governance problem, and it is going to become an execution problem for every enterprise leader trying to get real value out of AI investments over the next 18 months.

The Divergence Is Structural

The people building and funding AI systems see the technology through the lens of what it can do. They spend their days watching it solve hard problems, compress research cycles, and generate output that would have taken human teams weeks to produce. The optimism is earned.

The people encountering AI as customers, patients, employees, and citizens are seeing something different. They are watching their jobs get restructured around tools they were not consulted about. They are getting automated customer service interactions that feel worse than what came before. They are reading about AI-generated content that is unreliable, and they are being told it will get better without any clear accountability for when it does not.

That gap is not going to close because the technology improves. It is going to close, or not, based on how leaders deploy it.

What This Means Inside the Enterprise

When I talk to other CIOs about AI adoption, the framing is almost always internal. Productivity. Cost savings. Cycle time. These are real and worth pursuing. But the Stanford data is a useful reminder that AI deployment decisions have an internal audience that matters a lot: your own workforce.

Gartner's recent Digital Workplace Summit surfaced a phenomenon it calls "experience starvation": AI handles so much of the routine work that junior employees never build the foundational skills they need to develop judgment. That is a talent pipeline issue disguised as a productivity win.

The 59% of workers who will need new skills in the next two years did not sign up for that disruption. How you handle the reskilling expectation and communication around it will determine whether your AI rollouts create engagement or resentment.

Three Things That Actually Build Trust

I do not think the answer is to slow down AI adoption. The competitive pressure is real, and the organizations moving deliberately but steadily are separating from those that are not moving at all. But speed without accountability is exactly what is feeding the skepticism.

Here is what I have seen work:

Transparency about what AI is doing and why. This sounds obvious, but most organizations are still communicating about AI at the strategy level while leaving employees to encounter it at the workflow level with no explanation. When people understand what the system is trying to do, its limitations, and what oversight exists, trust goes up. When they just notice the tools changing around them, it goes down.

Measurable human oversight. The word "guardrails" gets thrown around a lot, but what actually builds confidence is being able to point to specific decisions where a human reviewed or overrode an AI recommendation. Not just as a compliance artifact. As evidence that the system is not operating without meaningful accountability.

Honesty about what you do not know yet. The gap between AI insider confidence and public skepticism is partly a credibility gap created by overclaiming. When organizations position AI as a solution before it has demonstrated results, and then it underdelivers, the trust damage is larger than if they had set more conservative expectations from the start.

The CIO's Role Here Is Bigger Than IT

The KPMG research from this week frames it well: the CIO is no longer just the platform provider. They are, increasingly, the co-architect of how the organization earns the right to use AI at scale.

That means showing up to workforce conversations about AI with honest answers, not just technology demos. It means designing governance with the employee and customer experience in mind, not just the compliance checklist. And it means treating public trust in AI as a business asset worth protecting, not a PR problem to manage after the fact.

The 10% who are excited right now are not the people whose trust you need to earn. They are already sold. The question is what you are going to do about the other 90%.


Jared Mabry is SVP and CIO at D4C Dental Brands and co-founder of ClearPoint Logic. He writes about the intersection of technology leadership, enterprise AI, and operating model design.
