Social Investment Series Blog #5: How data, evidence and analytical tools can be used to understand ‘what works’
A key element of the Social Investment approach is an increased focus on using data, evidence and analytical tools to understand what works and what doesn’t when it comes to the services communities interact with.
But what does that look like in practice?
In this final post in our Social Investment series, we explore how data, evidence and analytical tools can be used to answer two fundamental questions:
How do we know if outcomes are being achieved? And if so, by whom?
How do we determine whether outcomes are a direct result of an intervention, or whether they would have occurred anyway?
In this post, we provide pragmatic and technical guidance to help you answer these questions about the services you are looking at.
These questions are essential to designing and delivering services that work for our communities. And for investors, the answers provide critical information about where resources should be directed to deliver the best outcomes for whānau and communities.
Keep reading to learn more.
How can we use data to measure and monitor outcomes?
To understand whether an intervention is delivering for communities, we need a way to measure and track outcomes over time.
One way to do this is through outcome indicators.
Outcome indicators provide data about outcomes. These indicators allow service providers, agencies, evaluators or other users of this information to identify where changes or benefits have occurred as a result of an activity or programme, whether over a short-, medium- or long-term period.
When choosing an indicator to measure an outcome, there are several important things to consider. For instance:
Does the indicator have a strong link with the outcome we want to measure? While this may seem obvious, there are many occasions where these things are disconnected. One way to strengthen hypotheses around the relationship between outcomes and indicators is to look for existing research to support these connections.
What existing data is available? And what additional data will need to be collected? If multiple organisations are involved, this may involve coming together to map out who has what, and who should be responsible for collecting which components.
Over what intervals will this data need to be collected? While having ‘real-time data’ may be attractive, it is not always needed. A good rule of thumb is to match the frequency of data collection to the frequency of the decisions being made from this data. Further, if data from outside the service provider is expected to be used (such as IDI data), you should consider how often that dataset is updated, as this may be less frequent than you need.
The analysis of outcome indicators can be improved by including other descriptive data – for example, where participants live, who they received the service from, and demographic information such as age, gender and ethnicity.
By looking at both outcome and descriptive data together, investors and providers can understand if there are commonalities around who a service is working for and who it’s not working for.
This is especially important if, from an aggregate lens, it looks like a service is not achieving its outcomes, yet on closer inspection it is working well for a subset of the population. In this instance, it would be better to refocus the service on this smaller subset rather than divesting from the service entirely.
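As a rough illustration of what this kind of breakdown can look like in practice, the sketch below summarises a single outcome measure overall and then by descriptive groups, using pandas in Python. The file name and column names (outcome_achieved, region, age_band, ethnicity) are entirely hypothetical – your own dataset will look different.

```python
import pandas as pd

# Hypothetical dataset: one row per participant, with an outcome flag
# and descriptive columns. All names here are illustrative only.
df = pd.read_csv("service_outcomes.csv")

# Overall view: what share of participants achieved the outcome?
print(df["outcome_achieved"].mean())

# The same outcome broken down by descriptive data, to see who the
# service is (and isn't) working for.
by_group = (
    df.groupby(["region", "age_band", "ethnicity"])["outcome_achieved"]
      .agg(["mean", "count"])
      .sort_values("mean")
)
print(by_group)
```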
While data can do a lot to identify who a service may be working for, these findings on their own may be limited. Most likely, they will need to be coupled with qualitative analysis to really understand why a service is effective.
How can we use data and analytical tools to understand whether outcomes were achieved because of the service itself?
Once we know whether outcomes are being achieved, and by whom, the next step is to determine if these outcomes are due to the programme itself.
As mentioned in our previous blog post, there is a recognised hierarchy of evidence which ranks the robustness of research methods that can be used to undertake this analysis.
We can use methods from this hierarchy to estimate a counterfactual – that is, what would have happened without the programme. This allows us to assign a degree of attribution to the programme.
Methods for calculating a counterfactual primarily fit into two categories – experimental and quasi-experimental methods.
One commonly used experimental method is the Randomised Controlled Trial (RCT). RCTs assign participants randomly to treatment and control groups. Outcomes for the two groups are then compared to see how effective the intervention was.
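For illustration, a minimal sketch of how RCT results might be analysed is below. Because assignment is random, a straightforward comparison of average outcomes between the two groups estimates the effect of the intervention. The file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical RCT results: one row per person, with their randomly
# assigned group and a numeric outcome. All names are illustrative.
df = pd.read_csv("rct_results.csv")

treated = df.loc[df["assigned_group"] == "treatment", "outcome"]
control = df.loc[df["assigned_group"] == "control", "outcome"]

# Because assignment was random, the difference in group means is an
# unbiased estimate of the intervention's effect.
effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Estimated effect: {effect:.3f} (p = {p_value:.3f})")
```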
A common misconception is that counterfactuals can only be constructed via a randomised controlled trial. This leads to concerns that analysis requires the use of a control group, which would involve withholding service access from those who need it. Thankfully, there are quasi-experimental methods and techniques that use observed data to construct comparison groups without the need to withhold services.
Below are several examples of methods that fall under the quasi-experimental umbrella.
These methods can be rather technical in nature. However, depending on the type of data available, they can be applied to understand attribution.
Difference-in-differences
If data is available for participants and a comparable group of non-participants, both before and after a service was delivered, a difference-in-differences (DiD) model can be used. This method compares changes in outcomes over time (pre- and post-intervention) between participants and non-participants, helping to isolate the effect of the service.
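Below is a minimal sketch of what a DiD model could look like using statsmodels in Python. The file and column names are hypothetical: participant flags whether someone received the service, and post flags the period after delivery.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: outcomes observed before and after delivery for
# participants and a comparable group of non-participants.
# 'participant' = 1 if the person received the service, 'post' = 1 for
# the period after delivery. All names are illustrative.
df = pd.read_csv("did_data.csv")

# The coefficient on participant:post is the DiD estimate of the
# service's effect, net of pre-existing differences and common trends.
model = smf.ols("outcome ~ participant + post + participant:post", data=df).fit()
print(model.summary())
```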
Propensity matching
If data is available for participants and non-participants but only post-programme, propensity matching can be used. This method uses observed data points for each person to calculate a propensity score (the probability of receiving the service). Participants can then be matched with non-participants using this score. This way, differences between the groups are balanced, enabling a more robust estimate of the difference in outcomes.
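A simplified sketch of this matching process is below, using scikit-learn in Python. The covariate and column names are hypothetical, and a real analysis would also check that the matched groups are genuinely balanced before comparing outcomes.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical post-programme data for participants and non-participants.
# Covariate and column names are illustrative only.
df = pd.read_csv("programme_data.csv")
covariates = ["age", "region_code", "prior_outcome"]

# Step 1: estimate each person's propensity score - the probability of
# receiving the service given their observed characteristics.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["participant"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: match each participant to the non-participant with the
# closest propensity score.
treated = df[df["participant"] == 1]
control = df[df["participant"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.flatten()]

# Step 3: compare outcomes across the matched groups.
effect = treated["outcome"].mean() - matched_control["outcome"].mean()
print(f"Estimated difference in outcomes after matching: {effect:.3f}")
```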
Instrumental variable
An instrumental variable approach relies on there being an external factor (an ‘instrument’) that influences selection into a service but does not otherwise affect the outcome of interest. For example, if a programme happens to be available in some geographic regions but not others, and those regions are home to similar populations, where a person lives can act as the instrument: a comparison can then be made between those who received the service and those who didn’t.
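As a rough illustration, the sketch below estimates an effect via two-stage least squares, using regional availability as the hypothetical instrument. All file and column names are assumptions, and this manual two-stage version gives the right point estimate but not corrected standard errors – dedicated IV routines are needed for those.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data where 'region_rollout' (whether the programme was
# offered in a person's region) acts as the instrument for
# 'received_service'. All names are illustrative.
df = pd.read_csv("iv_data.csv")

# Stage 1: predict service receipt from the instrument alone.
stage1 = smf.ols("received_service ~ region_rollout", data=df).fit()
df["predicted_receipt"] = stage1.fittedvalues

# Stage 2: regress the outcome on predicted receipt. The coefficient on
# predicted_receipt is the IV estimate of the service's effect. (Standard
# errors from this manual two-stage approach are not corrected.)
stage2 = smf.ols("outcome ~ predicted_receipt", data=df).fit()
print(stage2.params["predicted_receipt"])
```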
Decisions about which method is most appropriate often depend on what data has been collected. We often work with clients to understand the art of the possible. But before we do this, we consider the following types of questions:
Can you obtain information from before and after the programme?
Can you obtain data from similar groups who didn’t receive the programme?
What other factors might have influenced results? Can these other influences be measured?
Is the data of sufficient quality?
How robust should my approach be when answering questions around effectiveness?
Not all assessments as to ‘what works’ are equal, and nor should they be.
Smaller, low-risk investments shouldn’t require a gold-standard analysis approach, unless they are intended to be a prototype that is later rolled out across the board. Larger investments, investments that are considered riskier to individuals, or investments that could be rolled out across the board may require more robustness when it comes to understanding effectiveness.
Any analysis around effectiveness needs to be fit for purpose, and the benefits of the analysis should outweigh any associated costs. As well as considering the investment required to undertake an analysis (and who funds it), another major consideration is the burden that data collection places on participants. We discuss this further below.
How can I support a more purposeful approach to data collection?
There is always a cost to collecting more data.
On the flipside, you may be tempted to collect as much data as possible to use for future analysis. However, collecting data for the sake of it is counterproductive. This is true from both a privacy and ethics standpoint and from a data quality perspective.
Purposeful consideration about what is required to measure outcomes and understand attribution can help ensure that the right data is being collected at the right time. Ideally, this consideration would take place before the delivery of a service commences.
If designed well, data collection can derive dual benefits – providing both operational insights and information about effectiveness.
One way to reduce additional burden is to utilise data that is already available – for example, by using administrative data. Use of this data can allow us to measure more objective elements of success, such as whether participants achieved NCEA level 2 or whether a person retained a job after an intervention and for how long.
However, there are limitations to this approach. Administrative data tends to be more deficit-based and likely won’t capture the full picture of outcomes the programme aims to achieve.
That said, as highlighted in our last post, administrative data can support understanding of outcomes achieved after the period in which services were received (if you are interested in understanding more about this, please reach out!).
This is the final post in our Social Investment series. As we touched on at the beginning of this series, the government’s Social Investment approach is still being defined, and many questions remain around how it will work in practice.
Regardless of how the government’s concept of Social Investment evolves, evidence-based decision-making will always be important to driving better outcomes and better services for communities.
We hope this series has provided a solid foundation in how data and analytical tools can be used to support better decisions around the design and delivery of our social services.
By understanding the tools available, investors and providers are better equipped to deliver services that are effective, efficient and support better outcomes for our communities.
If you would like to revisit any of our previous posts in this series, you can find them on our stories page.
If you would like to further discuss the material covered in this series or understand how to apply this knowledge within the context of your organisation, reach out for a kōrero – we’d love to connect! hello@nicholsonconsulting.co.nz