The Responsible AI Imperative: Why organisations must act now, or risk strategic irrelevance

The artificial intelligence (AI) revolution has well and truly moved past experimentation. It’s now a business-critical force shaping how organisations operate. Yet in the Global Alliance’s international study of nearly 500 communication professionals, we uncovered a troubling pattern: AI adoption is soaring, but responsible governance is lagging dangerously behind.

This isn’t just a compliance issue. Organisations without robust AI governance frameworks face more than regulatory headaches: they’re exposing themselves to reputational damage, missed commercial opportunities, and strategic disadvantage. Worse still, they risk losing stakeholder trust at a time when trust is everything.


The case for acting now

AI is already embedded in how many organisations function. Axios research reveals that 91% of PR professionals use AI tools. Yet only 39.4% of organisations have a responsible AI framework in place (IAPP AI Governance Report, 2024). The implications are serious.

Consider this:

  • Compliance risk: The EU AI Act’s obligations take full effect in 2026, with penalties for non-compliance reaching up to 7% of global revenue. US states such as California and Colorado are also rolling out their own regulatory frameworks (Gunderson Dettmer, 2024).
  • Reputation risk: We’ve already seen failures. McDonald’s AI drive-through ordering went viral for all the wrong reasons. New York City’s MyCity chatbot advised businesses to break the law. Air Canada’s chatbot promised a bereavement fare it couldn’t deliver, and a tribunal held the airline accountable (CIO Magazine, 2024).
  • Competitive risk: Organisations with mature AI governance see up to 10.3x return on their AI investments and outperform peers in revenue growth and efficiency (Agility at Scale, 2024; Berkeley Management Review, 2024).

The message is clear: Responsible AI isn’t just a technology issue; it’s a business one.


Governance is a communication challenge

Many organisations treat implementing AI as a job for IT or legal alone. It’s not. It’s equally a job for communication professionals.

Why? Because stakeholder trust, transparency, and reputation live in the domain of strategic communication, PR, and external affairs. The Global Alliance’s research shows that what derails AI initiatives isn’t the tech; it’s the failure to communicate it responsibly, ethically, and clearly.

That’s why the Venice Pledge matters. Its seven principles (Ethics First; Human-led Governance; Personal and Organisational Responsibility; Awareness, Openness and Transparency; Education and Professional Development; Active Global Voice; and Human-Centred AI for the Common Good) aren’t just guidelines. They’re a strategic framework. But like any strategy, implementation matters more than intention.

When putting together its ten principles for responsible AI, the Centre for Strategic Communication Excellence suggested three areas of focus that together form a framework:

  1. Develop: Responsible use guidelines
  2. Maintain: Oversight and governance
  3. Provide: Training and development

Responsible AI is more than governance; it spans systems, processes, policies and people.


Communication professionals are essential to success

Implementing responsible AI requires a communication mindset:

  • Stakeholder-first thinking: From employees to regulators, each audience needs tailored messaging. It’s not enough to ‘be transparent’; you need to communicate what matters, in a way that builds trust.
  • Education and change management: 74% of communication professionals already use generative AI (Comprend, 2024). A successful rollout means leaders and employees alike understand how and why AI is used and feel confident in its governance.
  • Crisis preparedness: Every AI system will fail at some point. How organisations respond will define their brand. Communication professionals already manage crises. Now we need to extend that expertise into the AI space.

Building capability through partnership

The most successful organisations treat responsible AI as a team sport. The team includes IT, legal, compliance and, crucially, communication. Strategic communication professionals bring the experience to frame the conversation, build understanding, and maintain alignment with organisational values.

Communication professionals:

  • Establish governance structures that include executive communication input
  • Co-develop policies that are clear, human, and purpose-led
  • Design frameworks that support stakeholder trust through consistency and transparency
  • Coach leaders on how to explain and defend AI decisions with confidence


The business benefits are real

There’s a measurable upside to getting this right. Deloitte research shows that 74% of advanced AI initiatives meet or exceed expectations when governance is done properly. PwC’s responsible AI work highlights that ethics-driven organisations retain customers and market share. Moveworks data shows that properly governed AI can cut content production time tenfold while building, not breaking, trust.


Recommendations for leaders ready to act

Now is the time to:

  • Establish a governance committee with representation from, or leadership by, the communication team
  • Assess current readiness: Where are the risks, gaps, and opportunities?
  • Pilot AI initiatives underpinned by the Venice Pledge principles
  • Bring in communication advisors who understand stakeholder engagement and risk
  • Educate, communicate, and measure consistently and ethically

The future belongs to organisations that embed responsible AI in their culture, not as a bolt-on to meet compliance requirements. And that shift starts with strategic communication.

Adrian Cropley OAM, FRSA, IABC Fellow, SCMP is a global thought leader in strategic communication, an advisor to boards and executive teams, and co-founder of the Centre for Strategic Communication Excellence. He works with organisations to develop and implement responsible AI frameworks and build employees’ capabilities.