Members of a coalition are like a ship's crew navigating rough waters. Everyone can see the horizon, but not everyone experiences the journey in the same way. The captain on the bridge has one view, the crew in the engine room has another. The navigator charting the course sees something else entirely.
Most coalition assessments ask the captain how things are going and call it truth. Too often, this produces a clean report that says everything is fine, even when half the crew is quietly planning their exit.
At the recent Collective Impact Action Summit, Resonance partnered with Chris Kirby from the Gates Foundation to share a diagnostic approach that does two things most assessments don't. First, it forces multiple stakeholder perspectives into the analysis: not just what leadership thinks, but what country teams, implementing partners, and governance members each experience. Second, it breaks coalition effectiveness down into observable, concrete elements drawn from best practices across 22 multi-stakeholder initiatives worldwide.
The resulting framework moves coalitions from the vague sense that "something isn't quite working" or "there must be a better way" to a clear picture of what they're actually good at and a concrete plan for what specifically needs to change.
Most leaders can sense when things aren't working. Meetings feel unproductive. Partners seem disengaged. Decisions take forever. But sensing dysfunction and diagnosing it are entirely different things.
Traditional coalition assessments ask broad questions ("How would you rate collaboration?"), generate aggregate scores, and leave leaders with the same vague unease they started with—now with a number attached to it.
Here's the first way our diagnostic differs: it breaks coalition effectiveness into concrete, observable elements rather than abstract aspirations. We drew these patterns from across 22 successful multi-stakeholder initiatives to identify the actual mechanisms that distinguish functioning coalitions from struggling ones.
The framework assesses coalition health across five interconnected dimensions.
Each dimension breaks down further into specific subcategories, from theory of change clarity to exit strategy planning. This moves coalitions beyond aspirational language to concrete functioning they can assess and improve.
This is what shifts coalitions from "we need better collaboration" (vague, unactionable) to "we need documented decision-making protocols and quarterly governance reviews with field partners" (specific, action-oriented).
Our diagnostic differs in a second way: it deliberately captures the different ways that stakeholders experience the coalition.
When you only ask governance body members how things are going, you get one version of reality. Add country teams and implementing partners, and more nuanced patterns emerge. High variance on a question is often dismissed as noise, but it is an important signal that different stakeholders experience the coalition differently.
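The difference between an aggregate score and a group-level comparison can be sketched in a few lines. In this illustration, the survey item, stakeholder groups, scores, and the 1.5-point gap threshold are all hypothetical, not the diagnostic's actual instrument:

```python
from statistics import mean

# Hypothetical 1-5 responses to "Decision-making is transparent,"
# grouped by stakeholder type (all names and scores are illustrative).
responses = {
    "governance_board": [5, 4, 5, 4],
    "country_teams": [2, 3, 2, 2],
    "implementing_partners": [3, 2, 3, 3],
}

# The aggregate score hides the story: it averages disagreement away.
all_scores = [s for group in responses.values() for s in group]
print(f"Aggregate mean: {mean(all_scores):.1f}")

# Comparing group means surfaces the perspective gap as a finding.
group_means = {g: mean(s) for g, s in responses.items()}
gap = max(group_means.values()) - min(group_means.values())
for group, m in group_means.items():
    print(f"{group}: {m:.1f}")
if gap >= 1.5:  # illustrative threshold for "groups disagree"
    print(f"Gap of {gap:.2f} points: treat as a finding, not noise")
```

An aggregate mean around the midpoint would look unremarkable on its own; only the group-by-group view shows leadership and field teams describing two different coalitions.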
The diagnostic captures these perspectives across dimensions like partner type, decision-making proximity, geographic position, coalition tenure, and functional role. Every coalition has its own relevant categories to consider—and at the Summit, participants emphasized the critical importance of including perspectives from coalition members with lived experience of the issues being addressed.
These viewpoints represent fundamentally different beliefs about how the coalition functions. A governance structure that looks inclusive from the top might feel inaccessible from the field. A MEL system that seems robust to strategists might feel like pure compliance burden to implementers.
Most assessments either ignore these gaps (by surveying only leadership) or average them away (by reporting aggregate scores). This diagnostic surfaces them as findings, and then forces coalitions to reckon with what those gaps mean for effectiveness.
When different stakeholder groups fundamentally disagree about whether governance is working, whose experience should drive the intervention? Most coalitions never ask that question because they never surface the disagreement in the first place.
The real value of any diagnostic is what happens after data collection. Too many coalitions stop at analysis, treating the assessment itself as the deliverable. Yet completing a diagnostic changes nothing without prioritized responses to the findings.
Our diagnostic includes a nine-box prioritization matrix that maps current performance against foundational importance. This framework helps coalitions distinguish between four types of findings.
This requires practitioners to recognize which gaps matter most. MEL system weaknesses might be real, but if governance misalignment is actively undermining partner trust, you need to focus your energy on rebuilding trust first. Both need attention eventually, but trying to fix everything at once guarantees nothing gets fixed. The matrix forces an honest conversation about sequencing.
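The sequencing logic a performance-versus-importance grid encodes can be sketched in a few lines. The grid coordinates, the category labels, and the example findings below are illustrative assumptions, not the diagnostic's actual terminology or scoring:

```python
def nine_box(performance: int, importance: int) -> str:
    """Place a finding on a 3x3 grid of current performance (1-3, low to
    high) vs. foundational importance (1-3). Labels are illustrative."""
    if importance == 3 and performance == 1:
        return "fix first"      # weak on something foundational
    if importance == 3 and performance == 3:
        return "protect"        # strong where it matters most
    if importance == 1 and performance == 1:
        return "deprioritize"   # weak, but not foundational
    return "monitor"            # everything in between

# Hypothetical findings scored by a coalition (scores are illustrative).
findings = {
    "governance alignment": (1, 3),  # low performance, high importance
    "MEL system": (2, 2),
    "exit strategy planning": (1, 1),
}
for name, (perf, imp) in findings.items():
    print(f"{name}: {nine_box(perf, imp)}")
```

In this toy example, a real-but-moderate MEL weakness lands in "monitor" while governance misalignment lands in "fix first," mirroring the sequencing argument above.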
The diagnostic also surfaces whether perspective gaps between stakeholder groups should shape priorities. If different groups fundamentally disagree about effectiveness, whose experience drives the intervention? The answer matters, and while uncomfortable, it is something coalition leadership must discuss.
Two fundamental shifts separate this diagnostic from the coalition assessments gathering dust in shared drives:
It translates best practices into observable, measurable elements. Most coalitions know intuitively when something feels off but lack language to diagnose what specifically isn't working. This diagnostic gives them that language. Rather than asking "how's collaboration going?" it asks whether partners can describe decision-making pathways, whether MEL data actually informs strategy changes, whether role definitions are documented and referenced. These are concrete elements that distinguish effective coalitions from struggling ones—and they're actionable.
It treats different perspectives as essential data, not complications. When stakeholder groups experience the coalition differently, most assessments either ignore it (by only surveying leadership) or hide it (by reporting aggregate scores). This diagnostic surfaces those gaps deliberately because they reveal where coalition structures are failing. A governance process that feels efficient to headquarters but exclusionary to field partners isn't "working fine with some communication issues"—it's fundamentally not serving the people it's designed to serve.
Together, these shifts enable something rare: coalitions develop and execute an evidence-based action plan to right the ship and drive results.
A prioritization framework rounds out these shifts, preventing the common trap of trying to fix everything at once. By mapping current performance against foundational importance, it helps coalitions answer: given limited capacity, what needs to change first?
This diagnostic bridges the gap between sensing dysfunction and fixing it. It gives coalitions language for what's working and what's not, grounded in best practices from initiatives that have achieved sustained collective impact. And it reveals how different stakeholders experience those structures, surfacing the gaps that single-perspective assessments miss entirely.
The result isn't a report that sits in a folder. It's an action plan: concrete elements that need strengthening, prioritized by what matters most, informed by the perspectives of everyone the coalition is supposed to serve.
Email info@resonanceglobal.com to take the diagnostic from your own perspective and see how it works.
We are happy to discuss your results and how to support your coalition.
Resonance is an award-winning sustainability and impact advisory firm specializing in partnership design, management, and measurement. We work with leading companies, nonprofits, governments, and philanthropies to deliver impact and advance sustainable prosperity for all.
Author: Monica Gadkari is a Senior MEL Specialist at Resonance Global, leading monitoring, evaluation, learning, and change management for complex partnerships. She specializes in translating research into scaled action and building data systems that enable evidence-based decision-making for programs reaching millions across Africa, Asia, and Latin America.