Key takeaways:
- Trust in AI grows through experiences, not promises. When AI deepens learning and strengthens engagement, trust grows organically.
- The Power of “And” unlocks the real opportunity. Technology and humanity, data and empathy, efficiency and creativity — these combinations shift technology from perceived threat to trusted collaborator.
- Responsible AI + learning science = measurable impact. Pearson's evidence shows its AI tutors drive 4x higher engagement and meaningful gains in critical thinking when grounded in learning science and clear guardrails.
The trust challenge
AI is generating remarkable new capabilities, and with them, very real human reactions: curiosity, excitement, hesitation. Yet in my conversations with leaders, one question cuts through: Can we really trust AI?
The truth is simple: trust is earned when people experience AI working with them — improving outcomes in how they think, learn and create. Not because they're told to trust it, or because they've read a policy document, but because AI proves through use that it can amplify human capability instead of replacing it.
This is where the Power of “And” comes in. The future won’t be built by technology alone. It will be shaped by technology and creativity, data and empathy, automation and human judgment.
In practice, this looks a lot less like delegation and more like collaboration. AI becomes a thought partner: challenging ideas, stretching imagination, and removing the friction that so often gets in the way of deeper thinking.
Where technology meets possibility
We’ve seen at Pearson how powerful embracing these intersections can be. When AI is grounded in learning science and shaped by responsible guardrails, something remarkable happens: AI doesn’t bypass learning — it deepens it. Our data shows that students using AI‑enabled tutors:
- are 4x more likely to be engaged, and
- generate questions that demonstrate higher‑order critical thinking 20% of the time.
These aren’t abstract metrics. They reflect real shifts in how people experience learning, and in how AI can support the outcomes that matter most.

Responsible AI + learning science = measurable impact
Outcomes > assurances
People don’t develop trust by reading a Responsible AI statement. Trust forms when:
- the AI tutor helps them understand a concept they struggled with;
- the learning pathway adapts to their pace and makes them feel seen;
- the system treats their data with transparent, ethical care;
- the experience feels human‑centered, not machine‑centered.
Trust becomes felt, not theorized.
And when people feel that safety and empowerment, fear gives way to engagement at scale.
A shared, global moment
Organizations everywhere are wrestling with the same challenge: How do we deploy AI in ways that elevate human potential rather than erode it?
That’s why Pearson is working alongside partners including AWS, Google and Microsoft to build global upskilling initiatives that combine the best of technology with the best of learning science. Because the real breakthrough doesn’t come from deploying technology faster — it comes from helping people learn to work with it.
The winning formula for the AI era is now clear:
Responsible AI + Learning Science = Measurable Human Empowerment
Where we go next
If the last decade focused on digitizing content, the next will focus on human‑centered augmentation — expanding creativity, capability, and confidence at scale.
This approach is already shaping how we operate across Pearson’s global workforce.
But getting there means choosing balance:
- intelligent systems and human learning
- machine speed and human agency
- AI capability and human purpose
Trust won’t be built by what AI promises. It will be built by what people experience when technology becomes a true partner in learning.
Close the learning gap with us. Read the report
----------

Ginny Ziegler is Pearson’s Chief Marketing Officer
