Anita King’ori is a Development and Strategic Communication Professional in Nairobi, Kenya.
Artificial Intelligence is no longer a distant disruptor; it is here, and it is reshaping how communications teams work. A recent Muck Rack report (Axios, 2025) reveals that 75% of PR professionals now use AI, a sharp jump from just 28% in 2023. Most still work in traditional media relations, but many are shifting toward owned platforms where AI can accelerate content creation, analytics, and audience engagement.
In development communications, the promise of AI is undeniable. It can streamline reporting, analyze massive datasets, and scale storytelling in ways we’ve never experienced before. Imagine quickly translating campaign messages into multiple languages or identifying trends in real-time community feedback. The potential is immense.
The tension, however, is that efficiency does not equal equity.
The Ethical Crossroads
Leaders like Ana Patricia Muñoz (Financial Times, 2025) caution us: while AI can scale impact, if poorly managed, it risks deepening inequalities. Algorithms can mirror biases. Automated narratives can silence the very voices we aim to amplify. Communities already marginalized may find themselves even further excluded if we fail to embed ethics at the center of this new frontier.
This is why, for communicators in development, the principles of “Do no harm,” transparency, and locally led solutions must remain our compass.
Why This Matters for Development Communicators
Development communications is not only about visibility but also about representation, trust, and dignity. We are not only selling products, but also shaping narratives that influence policy, behavior, and public perception. If AI-generated outputs distort reality, misrepresent communities, or strip away agency, the consequences go far beyond bad PR. They can erode trust, reinforce harmful stereotypes, and weaken accountability in the very systems we’re trying to strengthen.
As communicators, we hold both power and responsibility.
My Take: How Do We Integrate AI in Ways That Are Both Impactful and Ethical?
Community First, Always: AI should support local voices, not replace them. Use it to amplify what people are already saying rather than to create narratives on their behalf.
Consent in the Digital Age: Just as ethical storytelling demands informed consent for photos or interviews, AI-driven content should respect data consent. Communities deserve to know how their stories, images, or voices are being used.
Challenge Algorithmic Bias: Communicators must be vigilant. Who designed the AI? Whose data is it learning from? If the inputs are biased, the outputs will be too. We must question before we deploy.
Balance Speed with Accuracy: In an era where misinformation spreads faster than ever, AI can help us fact-check, but it can also generate errors at scale. Development communicators must value accuracy over speed.
Transparency Builds Trust: Be open about when and how you use AI. Communities and stakeholders are more likely to trust us when we don’t pretend machines are invisible contributors.
Looking Ahead
The future of development communications will not be AI-free. Nor should it be. But it must be AI-wise.
As communicators, we are not passive recipients of this technology; we are stewards of how it is applied in contexts that affect real lives.
If we lead with ethics, transparency, and inclusivity, AI can be more than a tool. It can be a partner in building equitable narratives, shaping fairer systems, and amplifying authentic voices.
But if we only chase speed and efficiency, we risk losing the very trust and integrity that give development communicators their power.
So here’s the challenge for all of us: let’s ensure that in the rush toward innovation, we don’t leave dignity behind.