Reexamining Evaluation
Changing nonprofit demands - Part 3
Image description: Close up of two people working together with laptops and a page of paper with complex notations on it. Photo by Scott Graham on Unsplash.
The emphasis on measurable outcomes and data collection has grown tremendously during my career. In my earlier years in the sector, making the case—telling the story about the need your organization was addressing and your solution—was important, but data was less critical. Now, funders are demanding much more proof of impact than in prior decades. The logic behind this trend is that funders and donors want to invest in effective programs that really achieve the changes they claim to. Nonprofits must be accountable and demonstrate their effectiveness, so that funding can be directed to where it will do the most good. Of course, nonprofit folks also want to do effective work, and good evaluation is formative as well as summative—it allows for learning and continuous program improvement.
A whole industry has grown up to help nonprofits develop evaluation systems, collect and analyze data, and report on their outcomes. Evaluation done well is difficult, time-consuming, and expensive. Even beyond the effort and expense, evaluation carries additional challenges. It can require nonprofits to collect information from their clients that is invasive and time-consuming, and that may discourage some from participating. Due to resource constraints, most nonprofits evaluate poorly or incompletely, presenting results that are suggestive rather than rigorously proven. A few manage to secure the funding not only to do their work but also to evaluate it rigorously and establish evidence-based programs. With this gold seal of approval, they can replicate their program in new communities. These programs can be great, but they are also static, bound to fidelity to the original approach and unable to adapt to changing community conditions.
Is formal evaluation worth it? Yes, but not always. It is important to strike the right balance between doing the work and evaluating it, and to design evaluation so that it does not damage your client relationships.
Some articles that I have read recently raise even deeper, more philosophical concerns about evaluation and measurement.
In The Tyranny of the Measurable, Mike Chitty outlines what is lost when we focus too heavily on measurable results. He comments, “When care is reduced to measurable output, it ceases to be care in the full moral sense. It becomes service provision. Task Completion. Risk mitigation. And something vital, something human, is amputated from the work.” He calls for a balanced approach that values storytelling along with data and refuses to reduce our work to things we can count.
In We Optimized Everything, And Made It Worse, Michelle Flores Vryn argues that “philanthropy has mistaken volume for value” and that nonprofit programs are distorted as “programs get built around funder priorities instead of the highest needs. And success gets measured in dollars, not in lives changed or systems transformed.” She also addresses the problems with scaling (covered in prior posts), saying, “Systems don’t scale. They can only transform. Choosing ‘scale’ as your north star deprioritizes the dynamics that actually enable systems to work better, like relational cohesion, trust and belonging.”
I think that evaluation is most useful when it is used not to determine the worthiness of a particular program for funding, but to advance the field. We need research that helps us understand the most strategic ways to reduce hunger, encourage college completion, reduce violence, etc. It needs to be shared within relevant professional associations and used for learning and continuous improvement.
The history of GiveDirectly is instructive. GiveDirectly was started as sort of the anti-program program—an experiment to see whether simply giving cash directly to poor people in Africa would have more impact than the programs that had been developed to serve them in a more structured way. If so, why not just provide resources, rather than putting energy into building organizations, training staff as helpers and case managers, and attaching requirements to how the funds can be spent? The appeal is obvious: rather than taking a paternalistic approach to people in poverty, GiveDirectly assumes that poor people are capable of running their own lives. Rather than spending the bulk of aid on organizational infrastructure to deliver it, the funds go directly to people in need. The good news is that this approach has been very successful, and a large body of research now confirms that direct cash payments change lives and lift people out of poverty. Please read about this type of work to better understand how the United States can benefit from Guaranteed Basic Income!
GiveDirectly is an interesting case study. It shows that a simple, non-bureaucratic approach to aid can be effective. At the same time, it has grown because of its investment in formal evaluation to prove the model. Ironically, they are playing the game, too. But most importantly, it drives home a key point: form must follow function, and we can’t let organization-building get in the way of mission achievement. For nonprofit leaders working to hold their organizations together under intensifying external pressures, this point is critical.