August 4, 2016

Know thy impact

By Dr Squirrel Main

‘Impact measurement’ – it seems to be the phrase on every philanthropist’s tongue.

Measuring the long-term changes catalysed by the organisations and people using your donations seems a simple enough concept, but when you are faced with a myriad of incomplete spreadsheets and missing data, the reality can be daunting. And that’s when you actually know what you are measuring.

So what’s the key to getting it right? Plan early. Know where you are going and you are less likely to go off course. After all, it’s easy to run fast if you have no idea where you are going.

First, you want to ask yourself: “What’s our endgame?” Is your foundation interested in the eradication of malaria? The promotion of ballet in regional areas? Increasing the school readiness of all Shepparton toddlers? You may have multiple goals, but it’s important to nail them down. Be specific about what you want to achieve.

Part of my role at The Ian Potter Foundation has been to work with the staff and our Board of Governors to narrow our guidelines so that our giving is both more efficient and more effective. We have refined the guidelines for four of our nine program areas, and the results have been rewarding: fewer declined applications and higher-quality proposals. (Time will tell the value of the long-term outcomes – that’s how I hold myself accountable!)

Once you have identified your endgame, it’s tempting (and far simpler) to leave your goals high-level and vague – after all, measurement takes time, energy and money. But measurement, done well, helps ensure that you are funding efficient and effective operators.

For instance, ‘school readiness’ can be measured by Australian Early Development Census (AEDC) scores – are you looking to raise the scores of all children in an area above a minimum standard, or to decrease the percentage of children in the lowest quartile? If it’s the latter, you may choose to channel funding to specialised early childhood services. If it’s the former, you may seek out global programs in the area that can demonstrate improved AEDC scores for all participants.
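To make this concrete, the two goals translate into two different metrics. Here is a minimal sketch in Python – the scores and cut-offs are invented purely for illustration, not real AEDC data:

    # Illustrative only: invented scores and cut-offs, not real AEDC data.
    scores = [42, 78, 55, 91, 33, 67, 88, 49, 72, 60]

    minimum_standard = 40        # hypothetical 'on track' threshold
    lowest_quartile_cutoff = 50  # hypothetical bottom-quartile boundary

    # Goal one (the former): raise all children above a minimum standard.
    share_above_minimum = sum(s >= minimum_standard for s in scores) / len(scores)

    # Goal two (the latter): shrink the group below the quartile cut-off.
    share_in_lowest_quartile = sum(s < lowest_quartile_cutoff for s in scores) / len(scores)

    print(f"Above minimum standard: {share_above_minimum:.0%}")
    print(f"In lowest quartile: {share_in_lowest_quartile:.0%}")

The same data yields two quite different numbers – which is exactly why the goal needs pinning down before the measuring starts.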

And if you’re not sure, consult with experts. No one on our staff or Board has a PhD in water management, but our environment committee spelled out w-a-t-e-r as a focus area. So we contacted universities, peak bodies and professional networks. In mid-July, we convened a dozen water experts at the Australian Rivers Institute at Griffith University to help us understand what needs to be done to protect and improve Australian water resources, the role we could play and how we would measure success. 

Once you’ve set the goal – what next? Let’s say you decide to increase ballet performance audience numbers to 1,000 in five regional areas, and you’ve received requests from 10 different organisations. Simple: do your research to pick the organisations you think will do the best work. Then build or buy a database – and use it!

Databases and numbers, while at times daunting, can be quite powerful. The Annie E Casey Foundation funded researchers at the University of Colorado Boulder to sort through programs for at-risk youth in search of solid numeric proof that they worked. Of the 600 original submissions, only 11 met the standard (there are more now). The City of New York decided to fund only foster care agencies that used programs from the list. Within two years, NYC reduced the number of young people in foster care from over 15,000 to under 10,000. Clear results. We owe it to our communities to fund programs that work.

At The Ian Potter Foundation, we use a database called GIFTS, but SmartyGrants, Salesforce, Microsoft Access or any of the others is a good starting point. Your database is essentially a collection of information about who you’ve funded, what the projects have been, where, for whom and – the crux – how successful they have been in delivering that ‘endgame’.
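If you are building from scratch, the core record is simpler than it sounds. A minimal sketch in Python – the field names are illustrative, not GIFTS’s actual schema:

    from dataclasses import dataclass

    # Illustrative grant record; adapt the fields to your own programs.
    @dataclass
    class GrantRecord:
        grantee: str               # who you've funded
        project: str               # what the project is
        location: str              # where
        beneficiaries: str         # for whom
        amount_granted: float      # what you contributed
        total_project_cost: float  # reveals your share of co-funding
        output_rating: int         # e.g. a 1-5 scale at completion
        outcome_rating: int        # e.g. a 1-3 scale against the endgame

Whatever tool you choose, it is the consistency of fields like these across every grant that makes the later analysis possible.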

Since I began at the Foundation 18 months ago, we have been back-entering historical grants from the last five years. It’s quite fun to play with the database – and very illuminating. Some examples:

  • We entered the location of all our projects. Our intern exported this information to Tableau Public, and the maps clearly showed that our environment funding in Victoria and Western Australia was higher than in the Northern Territory. The Board has since sent our Program Manager, Louise Arkles, on a scoping trip to Darwin to meet with NT-based organisations.
  • We also considered different types of support – infrastructure, capacity building, project funding – and noticed that fellowships had particularly strong outputs and outcomes (measured on 1–5 and 1–3 scales respectively). This observation led us to narrow our science guidelines to encourage more fellowships. In the words of our CEO, Craig Connelly, ‘Take risks on the project, not the organisation.’
  • We gathered data for questions that interested us. For example, does funder collaboration lead to stronger outcomes? We used our database to graph the percentage of the total project funded by IPF against achievement of outputs and outcomes (a minimal sketch of this analysis follows the chart below). The results are clear: more collaboration equals stronger outputs and outcomes. Our medical research guidelines now encourage applicants to have collaborative partners. And I encourage you to pursue the questions that interest your foundation.

Overall project ratings are higher when IPF funds are part of larger projects with larger collaborations of funders.
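For the curious, this kind of chart is straightforward to reproduce from a grants database. A hedged sketch in Python using pandas – the rows are invented for illustration, and the bucket boundaries are arbitrary:

    import pandas as pd

    # Invented example rows; a real analysis would pull these from the database.
    df = pd.DataFrame({
        "ipf_amount":     [50_000, 80_000, 30_000, 120_000, 60_000],
        "total_project":  [50_000, 400_000, 60_000, 1_200_000, 100_000],
        "overall_rating": [2, 4, 3, 5, 3],  # combined output/outcome rating
    })

    # IPF's share of each project: a low share means more co-funding.
    df["ipf_share"] = df["ipf_amount"] / df["total_project"]

    # Bucket grants by funding mix, then compare average ratings across buckets.
    df["funding_mix"] = pd.cut(df["ipf_share"], bins=[0, 0.33, 0.66, 1.0],
                               labels=["mostly co-funded", "mixed", "mostly IPF"])
    print(df.groupby("funding_mix", observed=True)["overall_rating"].mean())

Grouping by the share you funded, rather than the raw dollars, is what lets the comparison say something about collaboration.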

Do observations have to be numbers? No. We also use our database to learn from experience. We have a field in our database called ‘Learnings for foundation staff’. As a result of these lessons, we now call all grantees six months into their project just to check in, and we routinely ask questions such as, ‘Has this been done overseas? And if so, did it work?’ The answers can be illuminating!

We’ve learned to determine where the need is – for example, is it where you’ve traditionally donated, or do the AEDC scores say that somewhere down the road has greater need for early childhood services? And we’ve learned to ask for baseline data – how many children need the service? How many can the organisation serve?

We’ve also learned several lessons about project evaluations:

  • Set realistic timelines.
  • Create a pool of trusted evaluators.
  • Hold ‘Welcome to the family’ workshops for new grantees to review timelines, goals and expectations.
  • Involve key stakeholders as early in the process as possible.

The longer the duration of the grant, the greater the proportion of Key Performance Indicators (KPIs) met.
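This is another pattern that falls out of the database with a few lines of analysis. A minimal sketch in Python – again with invented rows, one per completed grant:

    import pandas as pd

    # Invented example: each row is a completed grant.
    grants = pd.DataFrame({
        "duration_years": [1, 1, 2, 3, 3, 5],
        "kpis_met":       [2, 3, 4, 5, 6, 8],
        "kpis_total":     [5, 5, 5, 6, 7, 8],
    })

    # Proportion of KPIs met per grant, averaged within each duration.
    grants["kpi_rate"] = grants["kpis_met"] / grants["kpis_total"]
    print(grants.groupby("duration_years")["kpi_rate"].mean())

As elsewhere, the calculation is the easy part; deciding what the pattern means for your guidelines is the judgement call.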

It is tempting to focus on the negative, but a process known as appreciative enquiry is also quite useful: you focus on your best grants and consider what went well. We looked at our 10 most successful grants within our health and disability program area and found that these exceptional projects were all fully funded. They were also more likely to be about providing employment opportunities than, say, funding purchases of adaptive equipment. And so we’ve adapted our guidelines in that program area so that we can do more of what we do well – make grants that improve employment opportunities for people with disabilities.

Finally, I have found the support of others in the sector to be invaluable. We host quarterly Philanthropy Evaluation and Data Analytics (PEADA) meetings where we discuss everything from the best questions to ask on a final report to what’s happening in the UK. Our last meeting was spent discussing the need for a single, unified outcomes taxonomy. Our next meeting will focus on key learnings.

It’s an exciting phase for our sector, and I’m looking forward to seeing the outcomes.

This article was published in Generosity, August 4, 2016.