Heather Esper

Recap from ANDE Metrics from the Ground Up Conference

The ANDE Metrics from the Ground Up Conference brought together a diverse set of organizations, enabling deep discussion and a highly engaging space for sharing innovative examples of collecting metrics. Having attended the conference since its inaugural year, I have found it fascinating to observe how the metrics conversation has changed and how much progress has been made over the past three years (see the 2010 conference post here).

In 2009, for example, there was resistance to the idea of collecting additional metrics and confusion around what those metrics might look like. In 2010, IRIS delivered a refreshed, standardized tool, which was a great stride for the community. However, that tool focused only on output-level data and could not demonstrate the entire impact of an organization.

This year, however, there was recognition that there are differences between outputs and outcomes. The conference kicked off with an exploration of different ways to collect outcome data. On the output side, both GIIRS and IRIS shared a preview of the first round of data they had collected. Indeed, Lindsey Yeung, the Impact Assessment Manager at ANDE, said: "It is encouraging to see how the sector has been able to move beyond a conversation around the importance of metrics to actually implementing common frameworks that are going to build transparency and credibility in the coming year as these tools continue to be promulgated by more organizations, helping us to more acutely understand impact."

Early in the conference, Antony Bugg-Levine of The Rockefeller Foundation emphasized the need for impact assessment. He articulated the importance of standardized metrics for the community so we have a way to identify successes and failures, and benchmark ourselves against one another in order to increase resources and attract more attention to the field. This sentiment resonated later in the conference when someone stated that metrics are probably the most critical thing we need to get right in order to articulate an asset class.

Early adopters of the IRIS model include Acumen, Grassroots Business Fund, and E+Co, all of which described their experiences with this metric collection model at the conference. The organizations said that although aligning and integrating IRIS' metrics with their current sets of metrics was time intensive, collecting the information wasn't. Their motivations for using IRIS were similar: to benchmark themselves against others in the field and to avoid "sitting in a vacuum."

And the IRIS method is being sharpened. Participants voiced concerns about how reliable the data collected with IRIS is, and how confident organizations can be that they are collecting the same information across all ventures. The full IRIS report will be published later this summer, and IRIS 3.0 is expected to be released early this fall.

GIIRS also reported on its world tour, which consisted of traveling to 9 countries to collect data from 13 of its pioneer funds, plus phone assessments with the remaining 12 pioneer funds. The data showed that the most frequent social enterprise models were (in order): products and services with a social or environmental focus, models serving the underserved, and job creation and workforce development. It was also encouraging to learn that 25% of the companies interviewed have two or more mechanisms in place to gain feedback from the community.

Regarding the data collection process, the GIIRS team found that portfolio companies felt the three- to four-hour assessment was helpful and educational. They also noted that translated versions of GIIRS in select languages are expected to be available in about two months.

During the first session of the conference, we were reminded to think past outputs and also demonstrate how ventures affect beneficiaries so we can learn how to make those ventures better. Both Ted London of the William Davidson Institute and Mike Ingram of Innovations for Poverty Action shared approaches that can be used to measure outcomes and impacts. A key step in these approaches is for organizations to clearly identify the question they want to answer (e.g., Does the program work? How does it work in relation to other programs? Why and how does it work?) to effectively inform the research design. During the session, Ted reminded us that if we don't demonstrate our value propositions, someone else may do it for us. The big takeaway was that ventures need to view measuring outcomes and impact as an investment.

During the conference we were also able to learn more about areas members are pioneering. These included, for example, work undertaken by Endeavor and SEAF to measure SGBs' impact on employees. Endeavor also shared how it is comparing itself against industry and regional data, while Grassroots Business Fund works with Dalberg to verify impact. Other areas members are exploring include best practices for data collection at high-transaction-volume organizations, when it is appropriate to measure outcomes, how we can better share data, and how to communicate impact to stakeholders.

Challenges in moving toward measuring outcomes were also identified during the conference, some of which we will work to address before the 2012 conference kicks off. One specific area of focus is to better define the differences between outputs, outcomes, and impacts by establishing a set of principles and examples for each type of metric. Another major challenge is to better communicate metrics, both as an organization and as a network, and, importantly, to be careful when sharing our results so that we don't overstate impact or wrongfully attribute it. It is also important that best practices for conducting such assessments be developed and made available across the industry.

Another key challenge identified was the cost of collecting outcome data. A fruitful conversation emerged about whether donors should be educated on the costs of such assessments so they will be more willing to fund data collection as part of every grant they give, or whether the venture should cover the costs, since collecting such data helps improve its operations and is thus an investment.

As you can see, there is much work still to be done in this realm. The exciting takeaway for me, as I mentioned before, is that progress is being made in an iterative way. And I look forward to sharing what new, exciting progress takes place come 2012!

Follow Heather on Twitter at @heatheresper

