Heather Esper

Metrics from the Ground Up

I had the opportunity to attend the ANDE Metrics & Evaluation Conference: Metrics from the Ground Up this past week in DC, along with over 120 others. The well-attended conference included many new faces, including fellow NextBillion writer Diana Hollmann. As an attendee of last year's inaugural event, co-sponsored with the Grassroots Business Fund (GBF), I was excited to see the progress of the metrics field over the past year, and interested to see whether the conversation had shifted at all.

The Friday before I left, I read a piece by Laura Freschi on AidWatch that questioned why the Millennium Village Project's (MVP) mid-point evaluation had gathered so little feedback. In her article, Freschi argued that the evaluation was not useful, stressing that the MVP should have been evaluated independently and should have included more useful metrics. Others chimed in to suggest that the format (a 100-page PDF) was an inappropriate way to gather feedback, and that a public webinar or open debate would have worked better. Still other readers left comments asking if and how the results were being communicated to the MVP's constituents, something that is asked of evaluations more and more these days.

Even though it's often difficult to track down end beneficiaries and share results with them, it is necessary; otherwise we may be contributing to survey exhaustion in the developing world. If interviewees aren't shown how their responses to a survey contributed to some sort of action, they are unlikely to understand the importance of their opinion and are less likely to contribute to the next survey (and we may soon end up with the low response rates we see in the US). Freschi and her readers confirmed that evaluations are drawing more and more criticism, and unless you want to end up on the receiving end, it is important to understand the key elements of conducting a good evaluation. In other words, it is no longer enough to simply conduct an evaluation; what you measure and how you measure it are just as important. I left for the conference excited to discuss some of these issues with other attendees: "How do you convey your impact findings, and the actions you took as a result, to end beneficiaries?" … "What do you measure and how?" … "Do you have a third party evaluate your impacts?"

One of the first sessions of the conference gave an overview of IRIS, PULSE, and GIIRS and how they relate to one another, which I'll attempt to outline below for all of you readers still trying to sort them out. To put it simply, IRIS is a common framework with standardized metrics, PULSE is a data management tool for tracking metrics, and GIIRS is a rating system. The idea is that PULSE users can track both IRIS and non-IRIS metrics, and GIIRS will use IRIS standards to inform its rating system. In case you want more details, short summaries of each follow:

Impact Reporting and Investment Standards (IRIS) is a set of output-based metric standards for use by fund managers and entrepreneurs. IRIS was developed with support from the Rockefeller Foundation, Acumen Fund, B Corporation, PricewaterhouseCoopers and Deloitte. In 2009, IRIS became a program of the Global Impact Investing Network (GIIN), which acts as a forum for identifying and addressing issues in the impact investing industry. IRIS' goal is to standardize the language and set a framework for tracking and reporting "impact" across four areas: profile, financial, operational, and social and environmental impact information. The metrics and definitions used in IRIS were developed by working groups of sector experts and approved by a committee of impact investing leaders. IRIS will develop benchmarks and industry-wide analyses based on anonymized organizational performance data. Currently, the GIIN is working with the Financial Alliance for Sustainable Trade (FAST) and the Aspen Network of Developing Entrepreneurs (ANDE) to promote the adoption of IRIS and to collect benchmark metrics from their members.

PULSE, developed in 2006, is an internet-based data management tool for fund managers and other intermediaries to track financial and non-financial performance metrics. PULSE was built on the Salesforce platform in collaboration with Google, with support from the Skoll Foundation, W.K. Kellogg Foundation, Salesforce.com Foundation, Rockefeller Foundation and Lodestar Foundation. It is managed by Acumen Fund, and App-X is commercializing it. Users can track both IRIS metrics and self-defined metrics in PULSE.

Global Impact Investing Rating System (GIIRS) provides independent, third-party social and environmental impact ratings for companies and funds. GIIRS was developed by B Lab, the organization that certifies B Corps, and is similar to Morningstar's financial investment ratings, except that it evaluates a venture's non-financial performance. GIIRS will include ratings for both emerging and developed markets, along with industry-specific performance metrics and aggregate ratings with benchmarking. It will also provide both company and fund impact ratings. The ratings methodology GIIRS will employ was developed, and is governed, by an independent Standards Board, and GIIRS will conduct full, on-site audits of 10 percent of all rated organizations. The Grassroots Business Fund (GBF) has been selected as one of the few organizations to test the new rating system.

Other frameworks and rating systems were also presented, including the Nexus for Impact Investing (NeXii). NeXii is a platform for electronic transactions and communications that lists, trades, settles and clears private capital and environmental credit transactions; some at the conference compared it to a social stock exchange of sorts. The platform aims to help organizations raise capital through advocacy and marketing services, and to provide tools and services that help funds better manage their portfolios.

Keystone highlighted its feedback surveys, which include five indices (efficiency, learning, net value, credibility, and satisfaction) and allow results to be compared across peer organizations. GBF and Acumen Fund have used the survey to measure their portfolio ventures' satisfaction with them as funders; GBF has also used it to measure end constituents' perceptions of its portfolio ventures. McKinsey shared its Learning for Social Impact initiative, which includes Tools and Resources for Assessing Social Impact (TRASI), a database of over 150 tools and resources for assessing social impact. TRASI is in beta, and its collection can be sorted along 18 different categories.

Many good recommendations came out of the interactive break-out sessions on the first day. Attendees split into groups according to the type of organization they represented: funders, practitioners, or small and growing businesses. We brainstormed the problems each group faces in collecting metrics, potential solutions, and ways the other groups could help us collect metrics better. Many suggestions centered on better communication and closer interaction between the three groups. One great suggestion was for funders to earmark a portion of each investment for evaluation, in order to raise the quality of those evaluations.

I closed the conference on the second day with a presentation challenging the community to move from focusing on outputs to measuring outcomes, covering key factors to consider when collecting outcome data at the project level and drawing on examples from our impact assessment work with VisionSpring. I left the conference excited to stay abreast of the work the aforementioned organizations are undertaking, but also questioning why more organizations aren't pushing to measure outcomes, and why the majority of attendees focused on what to measure rather than how to measure it. I look forward to attending next year and once again assessing the progress of the metrics field.
