Rob Katz

NYU: Measuring Impact, Valuing Investment

Last week, I had the pleasure of attending the Sixth Annual NYU Conference of Social Entrepreneurs here in New York. The day-long conference brought practitioners and academics together for a discourse on measuring impact and valuing (social) investment.

(Before I go much further, some needed disclosure: I sit on the conference’s Practitioner Advisory Board.)

The day kicked off with Jed Emerson, arguably the best-known expert in the field of strategic philanthropy and impact investing. Of course, Jed began his talk by demurring – he claims he’s not an expert in metrics or impact investing; rather, the expertise is in the room and the larger community. I beg to differ, Jed – after all, your introduction of social return on investment (SROI) is one of the foundations on which impact investing has been built.

But I digress. To Jed, the question is simple: “Are we maximizing total performance while generating real impact for our multiple investments?”

Easier said than done. Jed’s quick (and clear) overview of the history of impact investing made plain that while much has been done, we still have a long way to go. Specifically, Jed focused on two challenges:

  • First, the need for social management information systems to track outcomes – or else, as Jed put it, “you generate crap data.”
  • Second, the need to embed metrics into knowledge management, in order to actively improve practice.

Both of these are works in progress. The social MIS is on track, with the Impact Reporting and Investment Standards (IRIS) and the Global Impact Investing Network (GIIN) kicking off this year and Acumen Fund’s PULSE performance management tool slated for public release by the end of 2009. These tools have been incubated from idea to pilot to test, and now, as they launch, the entire sector is holding its breath, waiting to see whether they will stick as standards and bring us closer to avoiding Jed’s dreaded crap data.

Following Jed’s talk was a great panel on impact investing, but as it focused primarily on domestic work, I won’t go into it here on NextBillion.

The afternoon, however, brought Laura Callanan from McKinsey to the stage. Laura, of McKinsey’s Social Sector Office, offered an excellent 10-point guide to good program evaluations. If you are involved with aid or social investing, LISTEN UP. Each point may seem obvious, but taken together they have real potential to help us all (myself included!) raise our game when it comes to program evaluations. Without further ado, the 10-point plan:

  1. Hear the constituent voice
    1. Context matters!
    2. Give feedback on what works and what doesn’t
  2. Exercise rigor within reason
    1. Use the right tool for the job to maintain credibility and understand feasibility
    2. For example, randomized controlled trials are not the right tool for every intervention
    3. We’re in a bubble around the “gold standard” for measurement – be careful not to get caught up in it
  3. Drive assessment with learning
    1. Start with a question: “What are we trying to learn that will help us do our work better?”
    2. What are the unintended consequences? What is the external environment in which you’re working?
    3. Look at “expectation failures” – not necessarily program failures, but results that simply did not live up to expectations
  4. Don’t measure everything
    1. “Funders are asking for reams of information that they never really look at”
    2. Don’t shift the burden of information gathering to the grantees
    3. The Hewlett Foundation, for example, is cutting down its grant application and reporting forms by asking, “Do we use this information or not?”
  5. Design assessment and strategy together
    1. Assessment begins at the start of the program and continues throughout – not just for a few months at the end
  6. Don’t let assessment sit on a shelf
    1. “How many people have read an audit and done something differently?”
    2. How can audits be more actionable?
  7. Collaborate, don’t dictate
    1. Make sure your grantees have the resources to provide you what you’re asking them for
  8. Build off and build up
    1. What has already been tried – what worked, and what failed?
    2. Use existing knowledge and contribute more knowledge back into the sector
  9. Borrow, don’t reinvent
    1. Re-purpose existing assessment tools; don’t create your own until you’ve done a thorough search
  10. Foster a learning culture

When it comes to doing a thorough search (point 9), the new Tools and Resources for Assessing Social Impact (TRASI) database will be a big help. Be sure to check it out – it’s currently in beta – and sign up for updates.

All told, it was a lot of metrics for one day, but I left NYU last week feeling that we were closer than ever to counting what counts. After all, Albert Einstein said it best:

“Not everything that counts can be counted, and not everything that can be counted counts.”
