Net Impact: The New Appeal of Metrics and Evaluation
Guest blogger Kelly McCarthy is a Communications Manager and Research Analyst for the New Ventures Project at the World Resources Institute. Her current work focuses on developing impact metrics for the enterprise development community.
By Kelly McCarthy
There was a lot of buzz about “impact” last weekend at the Net Impact Conference. This year, however, it wasn’t just talk about creating impact, but, more importantly, about how we consider, measure, and prove it. Perhaps the word had been used too liberally lately, thus losing a bit of its meaning.
However, as I listened to many organizations whose work intends to generate positive environmental and social impact, it became apparent that a shift is occurring. Rather than talking about impact simply in anecdotes and “better than before” terms, foundations, funds, design-for-impact, and not-for-profit (and not-for-loss) organizations alike were talking about a “social capital market,” as Jason Saul, CEO of Mission Measurement, summed it up during one of the panels.
Following are some of the thoughts that came to mind from the perspective of metrics and evaluation while attending some of the sessions at the conference.
In a session titled Hype vs. Reality, panelists dug into the nitty-gritty of how we measure, monitor, and evaluate our work. “Everyone does knowledge management and monitoring and evaluation poorly,” said Elizabeth Nitze, VP of Ashoka. “After so much time we in the enterprise development sector are looking around wondering, what the heck happened? What are the best practices? There are none.” There was a unanimous nod of heads from fellow panelists and audience members around the room. However, in a sector that believes in the positive potential impacts of social entrepreneurs, there is light at the end of the tunnel.
Indeed, the conversation turned optimistic as panelists Brian Milder (from Root Capital) and Elizabeth Wallace Elders (from globalislocal) joined Nitze in a discussion about the mash-up of innovative minds at Google.org, Salesforce, and Acumen Fund leading the effort to develop what is currently being called the Portfolio Data Management System (PDMS). Officially announced at the Clinton Global Initiative, the PDMS is a web-based tool designed to track, share, and compare portfolio performance data, with the ultimate intention of helping the enterprise development community better manage, communicate, and maximize our collective impact.
This is all well and good, but does it pass the “so what” test? And will other efforts similar to the PDMS actually help improve how we talk about and demonstrate impact?
I thought I’d find an answer to these questions at a session titled Measuring Impact. For 90 minutes, panelists explored trends and questioned commonly held models for “proving” the impact of our work. Of great interest to me was a comparison of where the panelists thought we have been and where they thought we were going. According to the panel, we are coming out of a time where impact was measured independently, there was no true sense of urgency, accountability was a major theme, and “better than before” was the measure of success.
We are now moving into a time where we are starting to see success measured in the positive sense rather than just as “what didn’t happen” – we are starting to shift away from the use of the term “non” to describe what we do. In fact, the panelists spoke about the “business of impact,” referring to the fact that those who fund the work in this space are actually acting within a market that “sells and buys impact.”
This shift was explained in more detail through three major trends:
- Rather than raising funds we are now asked to sell impacts
- Rather than measuring programs we are now asked to measure outcomes
- Rather than an obsession with proving impacts we are now asked to measure our contributions
In addition, the panel declared that fundraising as we have known it is dead, determining that we aren’t really asked to put out an RFP anymore; we are asked to put out an RFR (Request for Results). “Simple” social and environmental responsibility can be just about a set of activities – just checking the boxes. But what happens because of those activities is what matters more to a funder.
Besides, we can’t be talking about “attribution” anymore because it’s next to impossible to prove. I guess the business of impact has really become a business.
So what do we do?
For those sectors that have struggled to come up with effective metrics – those that know the value of their missions but just not how to measure it in numbers – the panelists shared some words of wisdom.
Before you embark on trying to build metrics for your work, understand the difference between program evaluation and performance measurement. Ask yourself, “Are we trying to prove something or simply measure results?” Most of us only truly need performance measurement, but many of us think we have to go through an entire program evaluation to get there.
A program evaluation is a longitudinal study designed to test a hypothesis or perhaps some theory of change. Most likely, the reason our programs exist is because we have based our work on something that has been shown to be effective at addressing a priority issue. For example: “Providing access to bed nets for people in malaria hotspots is an effective means to reduce the incidence of malaria, thus improving the lives of those who receive them.” A hypothetical program may therefore be designed to distribute bed nets to help the organization achieve its goal of improving lives in, say, sub-Saharan Africa. Our panelists argued that trying to prove this again – and the attribution of the hypothetical organization’s activities – was frankly a waste of money and “ridiculous.”
Performance measurement, on the other hand, may be based on the theory of change. It seeks to state outcomes, measure the effectiveness of your activities, and make the results meaningful and relevant to your stakeholders. This is where our panelists agreed we all should be focusing our efforts.
It is critical to ensure staff buy-in. Panelists agreed that buy-in must come from the top down. An organization won’t be effective at performance measurement if executive-level management doesn’t see how it will drive the mission forward. Although it often seems hard to justify spending grant money on performance measurement, it is becoming increasingly important to do so.
Finally, just start measuring something! As with anything, it will be a process – metrics will grow and change over time. According to our panelists, some rules of thumb include:
- Metrics need to be important to the success of your organization.
- As a place to start, look at three mission-critical outcomes and build doable metrics that speak to these.
- Consider causation and ask yourself, “What is a reasonable assumption?” The intermediate outputs are what can be measured.
Not to leave out those involved in making our lives easier – like the incredible efforts of the PDMS team – the panelists had some thoughts and challenges for them as well:
Semantics matter. We will face an uphill battle to compare and effectively design metrics until we can come up with some standard definitions. The good news is that many organizations around the world are currently trying to do just that. A few that I personally know of are the ISEAL Alliance for certification organizations and the Rockefeller Impact Investing Collaborative (RIIC) through B-Lab.
Common metrics. The panelists agreed that semantics were paramount, pointing out that a likely next step is to define common metrics. We in the enterprise development community can enjoy some more good news. The Aspen Network for Development Entrepreneurs (ANDE) declared its commitment to developing core metrics for its members that can be integrated into the PDMS. Already, Dalberg and others have made headway with common social metrics, while our very own New Ventures is working with organizations such as E+Co to develop consensus-based environmental metrics.
Attribution is ridiculous. Our tools, metrics, and definitions should speak more to our contribution than to how the world changed because of us. Is it indeed as simple as Outcome – “what would have happened anyway” = Impact? The challenge will be determining how to figure out the “what would have happened anyway” piece.
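To make that arithmetic concrete, here is a minimal sketch of the panelists’ formula in Python. The program, the numbers, and the counterfactual estimate are all hypothetical, for illustration only – and estimating the “what would have happened anyway” baseline is exactly the hard part noted above.

```python
# Minimal sketch of the formula:
#   Impact = Outcome - "what would have happened anyway" (the counterfactual)
# All figures below are hypothetical, for illustration only.

def estimated_impact(outcome: float, counterfactual: float) -> float:
    """Impact attributable to a program, given an observed outcome
    and an estimate of what would have happened anyway."""
    return outcome - counterfactual

# Hypothetical bed-net program: 10,000 households protected after the
# program ran, versus an estimated 7,500 that would have obtained
# nets anyway through existing channels.
impact = estimated_impact(outcome=10_000, counterfactual=7_500)
print(impact)  # 2500
```

The subtraction itself is trivial; the entire difficulty lives in the `counterfactual` argument, which is why the panelists pushed for measuring contribution rather than proving attribution.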
Verification is the next hurdle. The panelists conceded that verification has never been easy, is largely absent, and can be very expensive. They put out a challenge to get creative! If we are indeed moving toward a social capital market, verification will be all the more important.
This is only a small fraction of the conversations that occurred around measuring our impact. In retrospect I shouldn’t have been, but I was truly astonished at how often it came up throughout the weekend. What really did it for me was when it became a central theme in the panel conversation with Causes on Facebook Director Matthew Mahan about effective ways to use web 2.0 tools to drive our missions forward.
I walked away thinking, “Social Capital Markets, Facebook, Google…Figuring out how to measure and communicate impact has finally become sexy!”