Experts Give Reasons AI Alone Can’t Win the Trust Game

Raheem Akingbolu

As Artificial Intelligence (AI) continues to redefine industries globally, communication professionals are being forced to confront a critical question: how much of their work should be handed over to machines?

At the Knowledge Hub session organised by the Public Relations Consultants Association of Nigeria in Lagos, that question took centre stage as industry leaders examined the growing influence of AI on storytelling, reputation management, and media intelligence.

The consensus was clear: AI may be transforming the speed and scale of communication, but it cannot replace the human judgment required to build trust.

Leading the conversation, Chief Executive Officer of Burson Africa, Karl Hæchler, delivered a nuanced perspective on what he described as the “over-automation risk” in modern communications.

Speaking on the theme: ‘AI-Driven Strategic Communication: Mastering Algorithmic Disruption and Digital Volatility’, Hæchler argued that while artificial intelligence is now embedded in nearly every stage of communication—from research to content creation—its role must remain controlled and deliberate.

According to him, the future of effective communication lies in a hybrid model where AI supports, but does not dominate professional output.

He said: “AI should not be doing all the work. At best, it should account for about 30 per cent, while 70 per cent must still be driven by human thinking, judgment, and creativity.”

On the ‘Rise of AI in Communication Workflows’, he said that across industries, AI tools are increasingly being deployed to automate routine communication tasks. From drafting press releases to analysing audience sentiment and mapping stakeholders, “what once took days can now be completed in hours.”

Hæchler illustrated this shift with a case study from South Africa, where his team used AI to generate a comprehensive stakeholder map and crisis communication strategy within a significantly reduced timeframe.

“The system was able to process large volumes of data, identify key actors, and produce tailored messaging frameworks for different audiences, including executives, regulators, employees, and investors.

“However, while the efficiency gains were undeniable, the limitations of AI quickly became evident.

“What AI gives you is speed and structure. But it does not give you judgment. It does not understand nuance the way humans do,” he said.

This distinction, he noted, is where many organisations risk getting it wrong—mistaking speed for accuracy and automation for insight.

On the ‘Authenticity Gap in AI-Generated Content’, he said one of the most pressing challenges identified during the session is what experts describe as the ‘authenticity gap’: the growing disconnect between AI-generated content and human expectations of credibility.

As AI-generated text, images, and videos become more widespread, audiences are becoming increasingly sceptical of digital content.

Hæchler warned that over-reliance on AI could further erode trust, particularly in high-stakes situations such as crisis communication.

“Everyone can tell when something is purely AI-generated. It lacks the human tone, the emotional intelligence, and the contextual awareness that people expect,” he said.

This is particularly critical in an era where misinformation, deepfakes, and manipulated narratives are spreading at unprecedented speed.

According to him, the more artificial the media environment becomes, the more valuable authenticity will be as a differentiator.

On ‘AI, Misinformation, and the Speed of Crisis’, the session also highlighted the role of AI in amplifying the complexity of crisis management.

“With the rise of generative AI, false narratives can now be supported by fabricated images, videos, and even historical records, making it significantly harder for audiences to distinguish between fact and fiction,” he said.

Complementing this perspective, Chief Executive Officer of TechCabal, Tomiwa Aladekomo, examined how digital platforms and algorithms are accelerating the spread of such narratives.

Speaking on the sub-theme: ‘Rewriting the PR Playbook: AI-Driven Storytelling, Media Intelligence and Reputation Management’, Aladekomo noted that the lifecycle of a crisis has dramatically shortened.

“In the past, you had time to respond. Today, within hours, a narrative is formed—sometimes with fake evidence supporting it,” he said.

He explained that social media platforms, combined with AI-driven amplification, now shape public perception long before official statements are released.

This shift, he noted, has forced a transition from reactive communication to predictive strategy.

On ‘From Automation to Intelligence’, he said that beyond content creation, “AI is also reshaping how organisations gather and interpret information. Tools capable of analysing sentiment, tracking conversations, and identifying emerging risks are becoming central to communication strategy.”

However, both speakers stressed that the real value lies not in the data itself, but in how it is interpreted.

While acknowledging that entry-level roles, particularly in research and monitoring, are already being affected, Hæchler maintained that AI would ultimately augment rather than replace human capabilities.

“If you’re not using AI, you’re not even on the table,” he said, underscoring the growing importance of digital literacy. However, he stressed that the real competitive advantage would lie in how well professionals integrate AI into their workflows without losing critical thinking skills.

As the session concluded, one message stood out clearly: in a world driven by algorithms and automation, trust remains the most valuable asset. And trust, as both speakers emphasised, cannot be automated. “It must be built: deliberately, consistently, and humanly.”
