Opinion

Trust and risk resilience in the age of AI

Artificial intelligence – it’s exciting, it’s pushing boundaries and it’s changing the game. For organizations that use it, AI also ushers in a new dimension of reputational risk.

Reputational risk isn’t new, and my health-care communications practice has long grappled with Hollywood’s portrayal of “big pharma.” Add to this the growing expectation, and scrutiny, around whether and how businesses and leaders should take public positions on issues like global conflict, climate change and social inequality, and challenges abound.

Yet, for communications professionals and the organizations we counsel, AI takes risk to an entirely new level. Staying in my health-care lane, for example: AI’s need for large amounts of patient data brings massive new data privacy and security concerns. Similarly, AI systems can make mistakes, such as diagnostic errors or flawed drug development plans, especially if the data used to train them is not representative of all patient populations. All these risks, among others, are exacerbated by an overarching societal mistrust of AI.

Artificial Intelligence has a trust problem

At Proof Strategies, we’ve been studying trust in AI for six years. Our 2024 CanTrust Index reveals a steady decline in Canadians’ trust that AI will contribute positively to the economy, down to 33 per cent in 2024 from 39 per cent in 2018. Further, despite hopes that AI will accelerate cures for diseases like cancer, only 27 per cent of Canadians trust that it will be used competently in health care.

Against this backdrop, the integration of AI into virtually all aspects of our lives speeds forward. This becomes a reputational risk multiplier for any problem that can be linked to AI, since there’s already so little trust in the bank. Deep distrust of big business doesn’t help.

(Mis)trust in big business

Year after year, our CanTrust Index study reveals that fewer than one third of Canadians trust large corporations, and only one quarter (26 per cent) trust their executives. Fewer than half of Canadians (48 per cent) trust their boss to be competent and effective and to do the right thing, and on average, employees give their employers a mediocre C grade on building trust with external stakeholders. In other words, if something goes wrong, most customers and employees are not ready to forgive and forget.

Now that AI has been added to this cocktail of mistrust, what’s an organization to do? Start by understanding the factors that drive trust, then apply them to the use of artificial intelligence.

Applying the science of trust building to AI

Despite what many assume, trust isn’t something that just happens on its own. Once understood, trust can be deliberately built, rebuilt and protected by nurturing its three ingredients: ability (competence), benevolence (kindness) and integrity (doing the right thing). Applying the ABI formula to AI, organizations should build trust as follows:

  • Ability: Demonstrate to your audiences the steps your organization is taking to use AI competently. This means showing that the organization understands not only AI’s capabilities but also its limitations, such as its inability to make moral or ethical judgments.
  • Benevolence: As our research reveals, trust in AI is low, likely driven by fears that it will make mistakes or replace jobs. This means approaching the subject with kindness and empathy toward audiences. Use clear, transparent communications that thoroughly explain the measures in place to protect privacy and security, and how AI can create new jobs. Build in plenty of feedback loops that encourage audiences to share their concerns.
  • Integrity: AI can turbo-charge task completion, but organizations need to assure their audiences that they won’t cut ethical corners in the process. Consider developing a code of conduct for the use of AI that covers honesty, accountability for mistakes and safeguards like human oversight.

AI risk resilience

Applying ABI to AI will help create a solid foundation, but you must also prepare for the worst. A risk-resilience process that safeguards against crises and issues includes benchmark research, rapid-response protocols, spokesperson training, listening tools powered by predictive analytics, and strategies for trust recovery and rebuilding.

Change is occurring faster now than ever before, and yet, it will never move this slowly again. Where AI takes us next is far from certain, and while it holds great promise for tomorrow, organizations that use it must make deliberate trust-building and risk mitigation a priority today.

Author

Jennifer Zeifman

Contributor

Jennifer Zeifman is Senior Vice President, National Lead, Health & Wellness, at Proof Strategies.
