
Friday, January 1, 2016

Heather Esper / Yaquta Fatehi

The State of Assessment: Why the ‘assess or not’ debate continues and how we can move beyond it (Part 1 of 2)

NEXTBILLION’S BEST OF 2015 CONTEST

Editor’s note: As part of our Most Influential Post of 2015 contest, we are re-publishing the articles that made you think, made you act, or maybe even made your day. This article was the most-viewed post on NextBillion for October 2015. To see the full list of the most popular posts in 2015 and to vote for your favorite, click here, or simply scroll down to cast a ballot. 

 

This post is part of a two-part series. Today’s post focuses on the ongoing assessment debate and explores the evolution of assessment to better understand why this debate continues. In tomorrow’s post, we will begin with the current state of assessment, share our stance on the debate and wrap up with five suggested activities to accelerate value creation through assessment.

Recent debate has focused on whether businesses should assess social, economic and/or environmental impact.

A case in point: Erik Simanis, head of the Frontier Markets Initiative at Cornell University, argued in 2014 that impact assessments are good for nonprofits but bad for businesses that want to profitably serve the poor with products and services. His argument cites internal reasons, such as management perception and high implementation costs, as well as external reasons, such as burdening customers and transferring costs to them. Shortly after, Irit Tamir and Mara Bolis of Oxfam America responded that Simanis’ stance would encourage and contribute to business negligence. They argued that profit should not get all the attention and that social returns matter. They viewed assessing impact as a means of managing risk, since Base of the Pyramid-focused ventures can have unintended negative consequences for stakeholders, especially when their product value chains are considered. These assessments also help identify new opportunities through customer feedback.

In our view, Simanis is concentrating on one type of assessment, an evaluation, since he appears to be focusing on causality. But as with all impact assessment discussions, it is important first to define terms. We use the following definitions: assessment means using data, either qualitative or quantitative, to gain a better understanding of an issue, such as an organization’s performance or its consumers’ well-being; evaluation means establishing causality in order to judge an issue. Thus evaluation is a type of assessment. And to go further, measurement implies numerical data.

This is not the first time we’ve heard this debate; it’s just the latest entry in a global conversation on why and how to assess impact that occurs at conferences, stakeholder meetings and CEO-led coalitions as well as in blogs, journals and classrooms. To better understand why this debate continues and how we can move beyond it, it is helpful to look at the evolution of assessment.

 

Context: Evolution of the assessment debate

Approaches to impact assessment have been implemented in the international development community since the 1950s, with logframes being a popular tool. (The logframe links an organization’s inputs and activities to resulting outputs, outcomes and impacts; for example, grant funding and training sessions might be linked to the number of farmers trained, then to improved crop yields and, ultimately, to reduced poverty.) In the 1980s, leading development theorists such as Robert Chambers advocated for the inclusion of program stakeholders in the design, analysis and reporting of impact evaluations. In the 1990s, assessment continued to gain a wider following due to NGOs’ increased need for accountability. Alnoor Ebrahim, Associate Professor of Business Administration at Harvard University, wrote (abstract) that this trend was driven by scandals that deflated public and stakeholder trust in the 1980s and 1990s, as well as by the burgeoning global NGO sector.

Ebrahim wrote that because accountability could be viewed through multiple lenses, several tools could be used for assessing it. These lenses included internal (accountability to staff), upward (accountability to donors and governments) and downward (accountability to clients and communities), as well as functional (accounting for resources) and strategic (accounting for impacts). In his review, Ebrahim also recognized the challenges in assessment, namely the mismatch between donor and NGO goals. And while he found assessments were increasingly used for upward accountability, he also identified significant potential to use them for downward accountability and as a learning tool to develop new, innovative models.

Meanwhile, the social entrepreneurship field was beginning to receive further notice. Greg Dees, one of the pioneers of the field, made the case for integrating accountability into the definition of the field itself. Social entrepreneurs must assess “their progress in terms of social, financial and managerial outcomes, not simply in terms of their size, outputs or processes,” he wrote in 1998. This definition calls on entrepreneurs to take steps to measure the needs of, and the value created for, the people they serve, as well as to meet the needs of their investors. (However, many impact investors were, and remain, unwilling to bear the costs of impact assessment.) Dees and co-author Beth Battle Anderson recognized that while it is difficult to measure and attribute impact, social entrepreneurs should conduct internal measurement and external evaluations. They even recommended the use of indirect or leading indicators to break down impact goals into “specific, measurable process and outcome objectives.” Around this time, the Millennium Development Goals also helped spur a big push to measure results.

Moving deeper into the debate, the conversation evolved from the need for assessment to the different metrics available and their varying purposes. Sheila Bonini and Jed Emerson, both at the Hewlett Foundation when they wrote their piece, pushed all organizations to concentrate on what data to track, how to track it cost-effectively, and how to integrate it into decision making. Similarly, other groups focused on measurement integration and standardization; for example, the Rockefeller Impact Investing Collaborative found the “lack of clear, consistent and credible impact” created a barrier to reaching scale. An example of standardized metrics is the Impact Reporting and Investment Standards (IRIS). Managed by the Global Impact Investing Network as a catalogue of output metrics, IRIS aims to increase capital flow and allow investors to compare the social, environmental and financial performance of their investees. From there, the conversation moved to the relationship between business and poverty reduction, and to assessment’s role in improving the work of grantors (foundations and agencies), social enterprises, NGOs and impact investors through mutual value creation.

So how, then, did the debate begin to yo-yo between “to assess or not to assess”? We and others, like Howard White of 3ie, believe (paywall) some of the confusion stems from interpreting assessment terms in different ways. Some use a narrower definition and require a counterfactual for an evaluation, while others focus mainly on mapping longer-term outcomes and impacts, a process that does not require a counterfactual.

Another important factor motivating the debate is the push for randomized controlled trials (RCTs), which demand substantial resources and impose rigid design requirements. A 2006 article by members of The Evaluation Gap Working Group, led by the Center for Global Development, is widely credited with spurring the adoption of RCTs in the development sector. RCTs had been an evaluation tool in the medical and pharmaceutical sectors for decades, and then made the leap to evaluating everything from microfinance to impact investments. A widely cited example (one that has been discussed in a past panel) is that of d.light, a company that sells solar home systems in emerging markets. The company’s grant provider pushed d.light management to conduct an RCT of customers’ health and educational outcomes as a condition of further funding. d.light argued that the RCT design would require it to offer products for free or with heavy subsidies, partner with unfamiliar distribution partners and target poorer households than planned. Together, the company concluded, those factors would make for a far less sustainable model for scaling. d.light’s proposed alternative would allow it to sell products at market prices, with its planned distributors, at scale in the areas where it operates.

In tomorrow’s post, we will begin with the current state of assessment and share our stance on the debate. 

 

Heather Esper is the Program Manager of Impact Assessment at the William Davidson Institute at the University of Michigan (WDI), which is the parent organization of NextBillion. 

Yaquta Kanchwala Fatehi is a Senior Research Associate for WDI’s Performance Measurement Initiative.

 

Photo courtesy of qimono.
