Across many projects and programmes, consultation has become routine: meetings are held, comments are recorded, and reports are written. But how often does that input actually change decisions? In this article, Antoinette Bonita Kamau highlights real-world engagement scenarios where stakeholder voices were heard but not acted upon. She also introduces a framework for examining whether engagement actually shapes outcomes.
Across Africa, we measure reach, attendance, consultation hours, and policy outputs. But we rarely measure whether engagement changes anything.
Over the past few weeks, I shared three field reflections.
In one, an irrigation canal was designed, funded, and presented to the community. During consultation, an elderly farmer pointed out that the canal crossed a grazing path. His comment was noted. Construction proceeded unchanged. Months later, that single design decision created daily friction between herders and farmers. The issue was not that consultation did not happen. It did. The issue was that the input did not alter the outcome.
In another case, a company held a well-organised town hall. During the Q&A, a young woman asked a specific question about training local youth for skilled jobs. The response referenced “long-term capacity building,” but no budget, timeline, or eligibility pathway was shared. Months later, frustration surfaced publicly. The organisation had consulted extensively. But there was no visible feedback loop connecting the question to a concrete action.
In the third case, a rural solar rollout initially followed its technical design despite early concerns from women about how tenants would access electricity when meters were placed inside compound walls. The concern was noted but not integrated. Conflicts emerged between landlords and tenants. When the project paused and the billing model was adjusted to reflect lived household realities, tensions eased and cooperation improved.
Across all three situations, participation occurred.
People were invited. Meetings were held. Comments were recorded.
What differed was whether stakeholder input influenced design, implementation, or follow-up communication.

That pattern led me to develop the Listening Index™, a framework that examines engagement across five practical dimensions:
- Inclusion – Who is present, and who is absent?
- Voice – Can stakeholders speak openly and meaningfully?
- Power – Does input influence decisions?
- Responsiveness – Is there a visible explanation of what changed, what did not, and why?
- Institutional Learning – Are adjustments made over time based on engagement?
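The article does not prescribe a scoring method, but the five dimensions lend themselves to a simple rubric. As a hypothetical illustration only, the sketch below assumes a 0–2 scale per dimension (0 = absent, 1 = partial, 2 = strong) and surfaces the weakest dimensions of an engagement; the function name, scale, and example scores are all my assumptions, not part of the Listening Index™ itself.

```python
# Hypothetical sketch of a Listening Index rubric.
# Assumed 0-2 scale per dimension: 0 = absent, 1 = partial, 2 = strong.

DIMENSIONS = (
    "inclusion",
    "voice",
    "power",
    "responsiveness",
    "institutional_learning",
)

def listening_profile(scores):
    """Return the average score and the weakest dimensions for one engagement."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    average = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    low = min(scores[d] for d in DIMENSIONS)
    weakest = [d for d in DIMENSIONS if scores[d] == low]
    return {"average": average, "weakest": weakest}

# Illustrative scores for the irrigation case: inclusion and voice
# present, power and responsiveness weak, little learning recorded.
irrigation = {
    "inclusion": 2,
    "voice": 2,
    "power": 0,
    "responsiveness": 0,
    "institutional_learning": 1,
}

print(listening_profile(irrigation))
# Flags "power" and "responsiveness" as the weakest dimensions.
```

A profile like this keeps attention on the gap the article describes: an engagement can score high on inclusion and voice while power and responsiveness remain near zero.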
In the irrigation and town hall cases, inclusion and voice were present. Power-sharing and responsiveness were weak.
In the solar project, the turning point came when power and responsiveness increased. The design changed. The atmosphere shifted. Conflict declined.
Over the past weeks, I have piloted this framework across three institutions to test whether listening can be assessed systematically rather than assumed. The early findings suggest a consistent trend:
Inclusion is common. Documented responsiveness and power-sharing are far less so.
Consultation can record voices.
Only listening can alter outcomes.