
Can trust be measured in an AI world?

Will artificial intelligence replace communicators? The Davos Communications Summit, held in Switzerland last week, suggested that this question is too narrow. The more important issue is what happens when AI helps institutions produce messages faster, while people become the media that decides whether those messages are trusted, challenged or dismissed.
At the Davos Communications Summit, Paul Holmes, founder and CEO of PRovoke Media, cautioned about AI (Image supplied)

The Summit brought together global communication leaders, reputation advisers, crisis specialists, technologists and corporate affairs practitioners to discuss crisis, ethics, branding, AI, measurement and leadership.

At its core was a question about authority, and how it is earned, lost and sustained when institutions can produce more language than ever, yet struggle to be believed.

SA’s draft AI policy an example

South Africa has already been handed a practical example. News24 reported that the draft National AI Policy contained fictitious references, likely AI hallucinations, and the draft has since been withdrawn after Communications Minister Solly Malatsi said unverified citations compromised its credibility and integrity.

The irony is hard to miss. A policy meant to guide AI became an example of the very risk leaders must understand: language that carries authority, while the thinking beneath it still requires interrogation.

In Davos, that tension ran through Paul Holmes’ caution on AI, Katja Fasink’s argument that people have become the media, and the wider discussions on trust measurement and leadership.

Why corporate crises can feel unfair

People now decide what matters, what circulates, which version feels plausible, and when an official explanation sounds thin.

They do this as employees with screenshots, customers with receipts, communities with memory, parents in WhatsApp groups, activists with networks and citizens with lived experience.

This is why corporate crises can feel unfair to organisations.

Leadership teams may believe they are presenting facts while stakeholders are already interpreting behaviour, motives, history and patterns.

A company may see an incident while employees see culture; a brand may see a complaint while customers see confirmation.

People become the media when official channels lose the exclusive right to frame reality. This is where Holmes’ AI caution becomes relevant.

Katja Fasink, CEO of key7 Communications and Cyber Security in Slovenia

Wimbledon 2025

Holmes used Wimbledon 2025 to show how easily the term AI is inflated.

The tournament’s line-calling technology may be accurate, useful and faster than the human eye, but it still performs a defined visual task, determining whether a ball landed in or out. Calling that intelligence gives the technology authority it has not earned.

That is the context for Holmes’ warning about “slop”.

The communications industry was producing bland, forgettable content long before generative AI arrived: interchangeable purpose statements, generic thought leadership, campaigns without cultural intelligence, and brand language that sounds impressive until it meets real people.

AI can now produce that material faster, cheaper and in greater volume.

Writing, he argued, tests whether an institution understands the issue or has merely assembled language around it.

Measuring reputation against business outcomes

The trust measurement panel, moderated by Catherine Blades, sharpened this point by asking whether communicators are measuring reputation against business outcomes such as talent acquisition, revenue generation, share price and purchasing trust.

She also pressed the room on whether AI is being treated as a stakeholder and whether data is used continuously from strategy to execution, evaluation, and action.

Panel: Can you measure trust in an AI world?

That may sound technical, yet it goes to corporate value. If trust shapes whether people buy, invest, work, stay, advocate, forgive or give an organisation the benefit of the doubt, measuring trust becomes a business discipline.

AI systems are now part of the information environment around every organisation.

They draw on media coverage, owned platforms, public records, search results, and user-generated content, increasingly shaping what people learn before they visit a company’s website or hear from its leaders.

Thabisile Phumo, executive vice-president of stakeholder relations at Sibanye-Stillwater

The human centre

The leadership discussion gave this argument its human centre.

Thabisile Phumo, executive vice-president of stakeholder relations at Sibanye-Stillwater, pointed to the need for sage leadership, while the broader panel explored how leaders must integrate different forms of intelligence without confusing voice with authority, data with meaning, or speed with judgment.

That kind of leadership recognises that authority without listening becomes brittle, data without context becomes dangerous, and AI without ethical leadership becomes a productivity machine with no moral centre.

It also acknowledges that younger professionals often sit closer to cultural signals, while more experienced leaders carry institutional memory, pattern recognition and an understanding of consequence.

As Chetna Makan, marketing manager at Advantics in France, put it, this generation may be “more fearless than entitled”, a useful distinction in a workplace where voice is too often mistaken for rebellion.

Chetna Makan, marketing manager at Advantics in France

Leadership makes decisions, communication creates meaning, and AI, at its best, supports both. Institutions get into trouble when they confuse these roles.

Machines may interpret organisations, but people still judge them. The machine scans the record and detects patterns; the person reads the room and decides whether those outputs deserve belief.

The lesson from Davos is clear. Communication is becoming more technical and more human at once. AI may replace tasks, but people are replacing institutional control.

About Lebo Madiba

Founder and Managing Director of PR Powerhouse | Communications Strategist | Corporate Reputation Leader | Podcaster at Influence