Measuring up


The Foundation is thrilled to have recently received the 2017 SIMNA Award for Outstanding Social Impact Measurement Funding Organisation. This recognition confirms that the Foundation’s efforts to increase the impact and effectiveness of our grant-making are on the right track.

The Ian Potter Foundation is in the midst of a transition to more strategic grant-making, building on past successes to ensure consistent, effective and strategic philanthropy. At its core, this approach is simply about ensuring our funding has a greater, sustained impact that benefits as many Australians as possible. Measuring the social impact of grants helps recipient organisations understand and communicate the social and economic value of their work. Measuring and reporting outcomes helps drive funding and increases competitiveness in the market, and with effective, shared measurement, organisations can benchmark themselves.[1] Indeed, funding sustained programs that can consistently demonstrate long-term positive outcomes is often seen as the holy grail of strategic philanthropy. To put it more simply: if our recipients look good, we look good.

It also means we can continue to make informed choices, aiming to fund programs and organisations that produce positive outcomes. For example, the Annie E. Casey Foundation funded researchers at the University of Colorado Boulder to sort through programs for at-risk youth and find solid, numeric proof that they worked. Of the 600 original submissions, only 11 met the standard (there are more now). As a result, the city of New York decided to fund only foster care agencies that used programs from the list. Within two years, NYC reduced the number of young people in foster care from over 15,000 to under 10,000. Clear results. As a philanthropic organisation, we owe it to our communities to fund programs that work.

As The Compass explains, Australia’s social progress has arguably been stymied because we have not concentrated enough on outcomes. Philanthropic organisations are nimble and can often set trends within the social service sector. Supporting social impact measurement has the twofold benefit of helping recipients be as effective and efficient as possible (by having clear strategies, measuring outcomes and reflecting on their processes) and helping The Ian Potter Foundation be a strategic, proactive grant-maker that targets programs and organisations making a real difference.

Supporting grantees

So, how is the Foundation supporting our grantees to improve their social impact measurement? Firstly, we fund social impact measurement directly in three ways: as part of a grant, as an impact enhancement and as a stand-alone grant, as illustrated by the following examples.

Integrated: as part of a project, for example, funding ThinkImpact to undertake social impact measurement with Les Twentyman’s INSPIREME2EMBRACE program, or the Cape York Institute to hire a part-time internal evaluator to gather data about outcomes.

Enhancement: providing supplementary ‘impact enhancement grants’ over and above the original grant, for instance, for the Karinya Young Women’s Service to contract a local evaluator to calculate the cost-effectiveness of its program; for the Museum of Contemporary Art to use Culture Counts to show outcomes from its family weekends; and for Anglicare to evaluate its TeachAR program.

Stand-alone: providing stand-alone support for rigorous, large-scale evaluations of multi-year projects, such as the randomised controlled trial (RCT) evaluation of right@home (a collaborative partnership of ARACY, CCCH at MCRI and TReSI) and the RCT of the cost-benefit of the Child Protection Society’s Child and Family Centre (Melbourne Institute of Applied Economic and Social Research and Murdoch Children’s Research Institute).

Secondly, the Foundation supports grant recipients via workshops. We conduct ‘Grantee Welcome Workshops’ after each board meeting in which we review the new grantees’ KPIs and plans for measuring outcomes and evaluating impact. We also hold subject-specific workshops – e.g. disability employment (Nov 2016), medical research (Feb 2017) – convening academic experts, grantees, staff and Governors of the Foundation to discuss best practice in outcomes measurement within a particular field. The recent medical research forum was particularly exciting and is leading to sector-wide sharing of outcomes measurement tools.

Thirdly, our Research and Evaluation Manager meets face-to-face (or via Skype) with all major grantees (>$100,000; n=86; 2015–present) to discuss project evaluation and dissemination strategies at the outset of the grant. Topics have included suicide data collection (Rural Alive and Well); measuring parent bonding (Kids Thrive); sourcing valid, reliable survey questions (Upcyclers, Rolling XSite); gathering baseline data (Kids’ Own); drafting evaluation plans (YSAS); using G*Power software to assess sample size (Wheelchair Sports NSW); and proofreading draft evaluations (>15).
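For readers curious about what a sample-size assessment like the G*Power exercise mentioned above involves, the same kind of calculation can be scripted in Python. This is a minimal sketch only; the effect size, significance level and power below are conventional illustrative values, not figures from any Foundation project.

```python
# Illustrative power analysis for a two-sample comparison:
# how many participants per group are needed to detect a
# "medium" effect (Cohen's d = 0.5) with 80% power at a 5%
# significance level. All parameter values are assumptions
# chosen for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

Running the numbers before a program starts, as these grantee meetings do, avoids the common pitfall of evaluations that are too small to detect the outcomes they set out to measure.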

Fourthly, we have created measurement and evaluation resources for grantees, including a pool of high-quality evaluators (TIPFEP) and examples of evaluation documents, such as a request for tender. We acknowledge the value of grantee experience, so we compile and disseminate grantee learnings (gathered from final reports). We also facilitate conversations between grantees, such as YWCA NSW sharing its use of Results-Based Accountability with Common Ground in Brisbane.

Lastly, we have shared insights on measurement and evaluation at sector meetings (QIMR/SIMNA event), on panels (Bush Heritage workshop for Australian Land Conservation Alliance), and at roundtables (e.g. Portland House Foundation and Donkey Wheel Ltd).

Evaluating ourselves

Over the past two years, we have been retrospectively entering outcomes for all of our grants closed since 1 January 2010. This has involved reading evaluation reports, ringing grantees and coding outcomes from 1,023 grants. The resulting learnings have led us to narrow the Foundation’s funding guidelines to improve efficiency and to focus our giving so it is more effective.

To date, the Foundation has refined guidelines for seven of our nine program areas, and the results have been rewarding: fewer declined applications and higher-quality proposals. For example, we narrowed our Science guidelines to encourage more fellowships after an analysis of types of support (infrastructure, capacity building, project funding) revealed that fellowships had particularly strong outputs and outcomes. Similarly, we narrowed our Disability guidelines to focus on employment once we saw that employment-related grants had stronger outcomes than those focusing on mobility aids. In Medical Research, we used our database to graph the percentage of the total projects funded by the Foundation against achievement of outputs and outcomes. The results were clear: more collaboration equals stronger outputs and outcomes. Our Medical Research guidelines now encourage applicants to have collaborative partners. Across all areas, we saw stronger outcomes in larger (>$100,000), multi-year grants. As a result of this evaluative review process, the Foundation is transitioning to funding objectives that build on past successes to ensure consistent, effective, strategic philanthropy across program areas.

We also use grants outcomes information to shape future interaction with grantees. For instance, we analysed the locations of all our projects using Tableau Public and found that our environment funding in Victoria and Western Australia was higher than our efforts in the Northern Territory. The Board has since sent our Environment & Conservation Program Manager on a scoping trip to Darwin to meet with NT-based organisations.
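The location analysis described above can be done with any data tool, not just Tableau. As a minimal sketch, a funding-by-state summary might look like the following in Python with pandas; the grant records here are invented purely for illustration.

```python
# Sketch of a funding-by-location summary, the kind of analysis
# that surfaced the Victoria/WA vs Northern Territory gap.
# The records below are fabricated example data.
import pandas as pd

grants = pd.DataFrame({
    "state": ["VIC", "VIC", "WA", "NT", "WA", "VIC"],
    "amount": [150_000, 90_000, 120_000, 40_000, 200_000, 60_000],
})

# Total funding per state, largest first
funding_by_state = (
    grants.groupby("state")["amount"].sum().sort_values(ascending=False)
)
print(funding_by_state)
```

Even a simple aggregation like this can reveal geographic gaps in a portfolio and prompt the kind of follow-up the article describes.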

We have also created a six-month telephone check-in for all grants to pick up potential issues earlier and help grantees thrive. Where we noticed inconsistent reporting, we re-worded our application, progress report and final report forms to be clear about the information (such as KPIs) we aim to gather from grantees.

On a final note, The Ian Potter Foundation aims to promote impact measurement across the philanthropic sector by sharing our methods (and our learnings!). To this end, we have convened the Philanthropy Evaluation and Data Analytics meetings (now a Philanthropy Australia funders’ group), held masterclasses (for Philanthropy Australia and Australian Philanthropic Services), spoken at conferences (Generosity Forum), written articles (Generosity, Alliance), held individual ‘tutoring’ sessions with other funders (n=6), and hosted a Victorian-based government–funder roundtable on outcomes measurement.

Time will tell the value of all our initiatives as we start to see the long-term outcomes – ultimately that is how we intend to hold ourselves accountable!

[1] Ogain, E., Svistak, M. & Casas, L. (2013). Blueprint for shared measurement. NPC for Inspiring Impact. http://www.thinknpc.org/publications/blueprint-for-shared-measurement/blueprint-for-shared-measurement-v2/


Tags

award, Dr Squirrel Main, evaluation, philanthropy

