The confidence gap: Why your people aren’t using AI even when they have access

Many organizations think they're further along with AI than they actually are.

They’ve rolled out tools. They’ve offered training. They’ve encouraged experimentation. And yet, when you look at how work actually gets done day to day, the impact often feels muted.

AI is available but inconsistently used. Momentum fades after the initial push.

The instinct is to add more training, more tools, more nudges.

But in many cases, the real issue is simpler.

It's confidence.

Access isn’t the same as readiness

Most employees can open a tool and generate an output. That’s no longer the hurdle.

What’s missing is confidence in when to use AI, how much to trust it and whether it’s safe to use openly in real work.

People hesitate. They worry about getting it wrong. They’re unsure how their manager will view AI-assisted work. They don’t know what “good use” actually looks like.

So they default to what feels safer: doing things the way they always have.

What stalled adoption really looks like

Low confidence shows up in subtle but consistent ways:

  • People use AI privately but avoid talking about it
  • Teams don’t share examples of AI-supported work
  • Managers don’t reference AI in feedback
  • Employees stick to low-risk uses instead of meaningful applications
  • Good experimentation stays isolated instead of spreading

You won’t see this clearly in dashboards, but you’ll feel it in the culture.

Why more training doesn’t solve it

Education matters. But education alone does not build confidence.

Confidence is built through:

  • Seeing peers use AI well
  • Hearing leaders talk openly about their own use
  • Clear signals that thoughtful AI use is valued
  • Feedback that acknowledges judgment, not just output

Confidence is social. It grows through norms, not modules.

The role leaders play

Employees watch closely for cues:

  • Does my manager use AI themselves?
  • Do leaders reference it in meetings?
  • Are examples of good use recognized?
  • Is experimentation encouraged or quietly avoided?

Silence is rarely interpreted as neutrality. It’s interpreted as risk.

When leaders model thoughtful use and reinforce good judgment, AI stops feeling like a side experiment and starts feeling like real work.

Competence vs. confidence

Competence answers, “Can I use this tool?”

Confidence answers, “Should I use it here, and will this be seen as good work?”

Most organizations invest heavily in the first question and largely ignore the second.

What real AI confidence looks like

You start to hear:

  • “I used AI to explore options before deciding.”
  • “It helped me pressure-test my thinking.”
  • “I refined the output before sharing it.”

Managers respond with:

  • “Tell me how you used it.”
  • “What did you accept and what did you challenge?”
  • “Where did your judgment matter most?”

That’s when AI becomes embedded in how work happens.

Building confidence is a practice, not a program

You don’t need a new initiative to start building confidence. You need consistent signals over time:

  • Leaders sharing how they use AI in their own work
  • Managers inviting AI-supported thinking into discussions
  • Teams showcasing thoughtful use, not just clever prompts
  • Clear expectations about responsible use by role

None of this requires new technology. All of it requires intention.

The better question

Instead of asking:

“Have we given our people access to AI?”

Ask:

“Have we created an environment where people feel confident using AI well in real work?”

That shift moves the work from tools to behavior, from rollout to culture. And that’s where lasting progress happens.

At Talent in the Age of AI, we focus on the human side of readiness: confidence, judgment and the leadership behaviors that shape how work actually gets done. Because the future of work will not be determined by who has access to AI. It will be determined by who knows how to use it thoughtfully and consistently.