SVP Vancouver’s Social Impact Coach Meagan Sutton empowers Investees to understand and measure the impact of their programs—find out how.
Non-profits and charities employ a variety of strategies, from hands-on program initiatives to funding efforts to policy change and advocacy work, in order to achieve their stated goals. Whether the goal is to eradicate child poverty or to improve the emotional wellness of school-aged youth, the social impact they are trying to make can only be realized if the strategy being used is effective.
Let’s imagine you’re jumping up and down to screw in a light bulb, one risky turn at a time. Standing on a chair would probably accomplish the task more effectively, right? But what if everyone around you is jumping too? What if you don’t realize the chair is an option?
I’ve been working with Investees for over seven months now and a big part of my role as SVP’s Social Impact Coach is to empower Investees with evaluation best practices. This is important for accountability purposes. It also fosters a culture of learning that enables Investees to better leverage their expertise and resources.
Here are eight lessons I instill in our Investees to help them evaluate their social impact more effectively.
1) Tracking the number of kids reached doesn’t tell us as much as we think it does
Counting outputs means tallying the number of kids, schools, books, or meals. Keeping track of these metrics is important for understanding an organization's reach, but they tell us little about that organization's social impact.
A poorly designed or implemented program that reaches a lot of kids could have very little (or even a negative) impact. For example, what if by jumping and twisting that light bulb, you’re actually damaging the light socket? That wouldn’t be captured by simply tracking how many light bulbs you’ve changed. What needs to be tracked are outcomes.
Outcomes tell us about the short-, medium-, and long-term change experienced by those we are trying to affect. I work with Investees to build on their output tracking by helping them define their intended outcomes and then showing them ways to measure them.
2) Comparing before and after is not enough
A lot of us are tempted to measure something before and after an intervention, such as tracking a child’s reading level before and after their completion of a program like One-to-One. We assume that any change can be attributed to the program, but this may not be the whole story.
We need to consider the other possible factors that could have affected these children during this time, such as natural development or supports they may be receiving outside the program. By doing so, we can identify additional ways to understand the actual social impact of the program.
3) Individual stories of impact are illustrative, but can’t be generalized
Both funders and non-profits are tempted to be moved by individual stories of impact—and for good reason! However, one story cannot be generalized to an entire population.
That doesn’t mean there is no place for stories in evaluation. Techniques such as Most Significant Change or Outcomes Harvesting can be used to glean actionable insights from qualitative data which can complement more quantitative data. These techniques are also useful in collecting information about the unintended effects of a program.
I encourage Investees to use qualitative data, but to make sure they triangulate it with other types of data about a similar effect.
4) Know WHY you’re evaluating
This may seem like common sense, but it’s very easy to get into the habit of measuring the same stuff year in and year out, even if what you’re measuring is no longer relevant to your decision-making.
In survey design, for example, I often find myself asking Investees, “Why are you asking this question? What will you do when you get the answer?”
By bringing this critical lens to survey design, I’ve helped Investees cut survey lengths, improve clarity, and significantly increase survey response rates—recently, GrowingChefs! saw more than a 40% increase in survey responses!
5) Get clear on your North Star, AKA the problem you are trying to solve
Our Investees often focus on several different but interrelated social problems, and they uncover even more as they go along. This can make it hard to step back and clearly articulate the main problem the organization is trying to solve, but doing so is essential.
Our Theory of Change work helps Investees do this by asking them to explicitly map the logical connections between what they do and the social impact they want to see. Oftentimes, this process uncovers assumptions that need to be validated through evaluation.
6) Know what you can reasonably measure
Imagine throwing a pebble into a pond. Once the pebble hits the water, ripples move outward. The pebble is the program intervention. The ripples are the outcomes—the social impact of that intervention. The smaller ripples closer to the centre are more controllable. As we extend outwards, the larger ripples become subject to other effects—rocks, wind, raindrops, sea monsters.
Using this analogy, I work with Investees to map their intended outcomes in order to understand which ones are most meaningful. At the scale our Investees are operating at, trying to measure change in something like “making global food systems more sustainable” is difficult and costly, if not impossible. Instead, some of the main outcomes we tend to focus on are:
- Their reach and their productivity / outputs from programs
- Who they are reaching / their targeting
- Actual uptake by their target population / level of engagement
- Feedback data / user satisfaction and user-reported impact
7) Be ready for the results
Investees need to be ready to hear the response to what they are evaluating. If they uncover that their program is ineffective, they must be ready to take action rather than cherry-pick the positive data and carry on. And that could mean making serious changes.
I test Investees on this when we are designing evaluations by asking them questions such as, “What if the majority of parents say that the kids don’t read more books outside of the program? What will you do?”
Further, I remain heavily involved during the data analysis and interpretation process and facilitate conversations around any surprising results in order to help Investees recognize these as learning opportunities.
8) Evaluate, then scale (and then evaluate again!)
Depending on where a non-profit is in its lifecycle, it may be positioned to scale its social impact. However, it’s important to scale up something that is working well, and to be ready to assess the quality of implementation as the organization scales.
Evaluating a program before scaling, and then monitoring it periodically during the scaling process, is essential to ensure that the quality of programming does not suffer.
Case in point: I am currently working with an Investee that is planning to scale the reach of its program to new schools. First, we are designing an evaluation to understand how well their program is working today. Once we know that, we’ll be able to explore scaling models and identify what parts of the program need to be improved before scaling can begin.
To learn more about social impact evaluation and the work that Meagan is doing with our Investees, check out her ongoing Social Impact Event Series.