-
Process evaluation
A process evaluation focuses on the implementation process and attempts to determine how successfully the implementation (inputs and outputs) followed the intended plan. Process evaluation typically combines qualitative insights and quantitative data.
When conducted early on during an intervention pilot, a process evaluation can inform improvements to the implementation of an intervention when it is scaled up.
When conducted mid-way through implementation, it can inform improvements to the remaining stages of delivery.
When conducted at the end of an intervention, it can improve understanding of how the intervention worked and inform future schemes.
-
Lessons learnt
Lessons learned evaluation identifies what influences the impact of an intervention and how that impact can be improved. Lessons can be learned at any stage and used to improve implementation, either during the latter stages of implementation or in future implementations. These are sometimes termed “outtakes”.
-
Impact evaluation
An impact evaluation aims to measure changes in outcomes and impacts that can be attributed to the intervention. It is a mainly quantitative assessment of incident and implementation data, typically completed at the mid-point or end of an intervention.
A mid-point impact evaluation can inform decisions on whether or not to continue the intervention, or whether there is a need to change it.
An end-point impact evaluation provides evidence of effectiveness, informing decisions on future trespass prevention and detection as part of a summative evaluation (see below).
-
Summative evaluation
An end of programme evaluation of the overall impact of an intervention, its relative cost effectiveness and whether there is evidence to support its wider implementation and/or improvement of future versions. Summative evaluation combines process, impact and lessons learned evaluations. Summative evaluations tend to be completed for larger interventions. A summative evaluation helps judge whether one intervention is more cost effective than another and draws out lessons learned for scaling up an intervention.
-
Consulting on process evaluation and lessons learnt to determine feasibility
Process evaluation and learning lessons often involves engaging with the implementation team and asking for their support in providing data and information. It is, therefore, important to consider what information is available and to ensure that the evaluation is proportionate and does not interfere with the implementation teams’ activities.
It is recommended that there is consultation with the implementation team. If the answers to the following questions are yes, then that element of a process evaluation may be possible and worthwhile.
- Can data/information be collected and recorded on the extent and timing of implementation?
Ideally the implementation team is briefed in advance on what data is needed to support an evaluation and what data/information might need to be collected. If yes, the extent of implementation can be evaluated.
- Can a reasonable level of engagement be achieved with the implementation team and target audience?
The implementation team should be consulted early on and agreement reached on the form and extent of interviews, questionnaires and workshops that are needed to gain feedback and insights. If yes, a qualitative evaluation of the implementation process and lessons learned may be possible subject to question 3 below.
- Can the ethical and safety risks of engaging with the implementation team, stakeholders and target audience, such as for the purpose of interviews, be managed?
Any risks associated with engaging with people or visiting locations need to be identified and assessed. These may include physical hazards associated with site visits (such as viewing fenced sites), risk of assault when consulting members of the public/offenders and ethical risks of engaging with young or vulnerable persons.
The identification of ethical and safety risks does not preclude a process evaluation; an ethics and safety plan may be devised to manage them.
- Will the intervention have been implemented long enough for stakeholders to be able to offer insights and feedback?
In order for stakeholders to be able to offer feedback and insights, the intervention needs to have been at least partly implemented. For example, 5 out of 50 school visits have been completed, or 10 out of 100 stations have had witches’ hats installed. As the actual implementation may be delayed, the scheduling of process evaluation needs to be responsive to the progress of the implementation.
- Is the intervention new?
If the intervention is new or is being applied in a new context, this increases the possibility of there being new lessons to be learned.
-
Consulting on impact and summative evaluations to determine feasibility
It is important to ensure that it is feasible and meaningful to perform an impact and/or summative evaluation. If the answers to the following questions are ‘yes’, then those elements of an impact evaluation are more likely to be feasible and worthwhile.
- Has the intervention been effectively implemented?
If the intervention has not been effectively implemented (as per the findings of process evaluation), then there may be no value in attempting to measure its impact. For example, if the number of actual police patrols is 10% of the planned number, perhaps due to other unforeseen policing needs, the extent of implementation may be insufficient for an impact to be measurable.
- Is there sufficient and reliable incident data to support an impact assessment?
There are a number of considerations:
- Can incident data be accessed?
- What data exist for the impacts being measured?
- Is incident data available for comparable before and after periods?
- Information on the cost of implementation may allow for cost-benefit analysis.
- Is the data reliable and trusted? Reliability includes:
- What is the potential for omissions or misreporting of incidents?
- Might the recording of incidents change before or after the launch of the intervention, or might different areas or teams record incidents differently?
- Is it possible that the intervention will lead to a higher level of detection and recording of incidents (e.g. an increase in patrols or CCTV leads to a higher level of detection and recording)?
- Does the data directly measure the intended impacts?
If an intervention is targeted on a specific type of trespass, such as fare evasion, does the data distinguish between fare evasion and other forms of trespass?
If the intervention is targeted on incidents with a high risk of injury, does trespass data provide a means of recording incidents with and without a high risk of injury?
If an intervention intends to reduce all forms of trespass, then there is less need for data on specific categories of trespass.
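The before-and-after comparison that these data considerations support can be sketched as follows. This is an illustrative sketch only; the monthly incident counts are hypothetical.

```python
# Illustrative sketch: comparing incident counts for comparable
# before and after periods (all figures hypothetical).
before = [14, 11, 16, 12, 13, 15]  # monthly incidents, 6 months pre-intervention
after = [9, 10, 7, 11, 8, 9]       # monthly incidents, 6 months post-intervention

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
change = mean_after - mean_before
pct_change = 100 * change / mean_before

print(f"Mean incidents/month before: {mean_before:.1f}")
print(f"Mean incidents/month after:  {mean_after:.1f}")
print(f"Change: {change:+.1f} ({pct_change:+.0f}%)")
```

A real evaluation would also need to check that the before and after periods are genuinely comparable (same length, same season, consistent recording practice), as the reliability questions above make clear.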
- Is there sufficient and reliable intervention data to support a cost-effectiveness evaluation?
If data is available on the quantity of implementation (such as how many school visits were completed), this may allow an assessment of cost-effectiveness to be performed.
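As a hypothetical sketch of such a cost-effectiveness calculation, combining intervention cost with implementation quantity and an impact estimate (all figures invented for illustration):

```python
# Illustrative cost-effectiveness sketch (all figures hypothetical).
total_cost = 60_000.0      # total cost of the intervention
visits_completed = 40      # e.g. school visits actually delivered
incidents_avoided = 12     # estimated from the impact evaluation

cost_per_visit = total_cost / visits_completed
cost_per_incident_avoided = total_cost / incidents_avoided

print(f"Cost per visit completed: £{cost_per_visit:,.0f}")
print(f"Cost per incident avoided: £{cost_per_incident_avoided:,.0f}")
```

The second figure depends on a credible estimate of incidents avoided, which is why cost-effectiveness assessment relies on the impact evaluation questions in this section.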
- Is the intervention large enough for it to have a measurable impact?
The scale of the intervention will influence the ability to measure impact using incident data. For example:
- Running police patrols for 5 hours per week for one month at a single station may not have a measurable impact.
- Running police patrols at 10 stations for 8 hours per day each for six months may have a measurable impact.
If each instance of an intervention is small scale, an option is to perform a combined evaluation across all locations benefiting from the intervention.
- Is the trend in incidents for the period before implementing the intervention clear enough for an impact to be reliably measured?
If the number of incidents fluctuates substantially in the years prior to the intervention, it can be difficult to reliably identify a change in the number of incidents after implementing the intervention.
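One simple way to judge this is to compare the post-intervention change against the normal year-to-year fluctuation in the pre-intervention period. The sketch below uses hypothetical annual counts and the sample standard deviation as a rough yardstick; a real evaluation would use a more formal statistical test.

```python
import statistics

# Illustrative sketch: is the post-intervention change large relative
# to normal year-to-year fluctuation? (All figures hypothetical.)
pre_years = [42, 55, 38, 61, 45]   # annual incidents before the intervention
post_year = 40                     # annual incidents after the intervention

mean_pre = statistics.mean(pre_years)
sd_pre = statistics.stdev(pre_years)   # sample standard deviation
change = post_year - mean_pre

print(f"Pre-intervention mean: {mean_pre:.1f}, std dev: {sd_pre:.1f}")
if abs(change) < sd_pre:
    print("Change is within normal fluctuation; an impact may not be reliably measurable.")
else:
    print("Change exceeds normal fluctuation; an impact may be measurable.")
```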
- Has the intervention been implemented long enough for it to have had a measurable impact?
The time taken for an intervention to have a measurable impact needs to be judged. The assessment period should then be planned to align with the judged impact timescale. For example:
- A physical barrier or patrol may have an immediate impact if the fence or patrol covers the entirety of the targeted location. In this example an impact assessment could be completed within a few months.
- A schools-based education scheme that engages 5% of the target schools each year may take many years to have a measurable impact on incidents. In this example, an impact assessment may need to be completed over a period of 5 or more years.
- Is comparable data available on the cost effectiveness of other interventions?
If data is available on the cost effectiveness of other interventions, this creates the possibility of comparing between interventions as part of a summative evaluation.
- Is data available on how other factors may have influenced the change in the number or outcome of incidents?
A key aim of summative evaluation is to check whether a change in incidents can be attributed to the intervention or to other factors, such as changes in the number of passengers, changes in incident reporting/recording or changes in the local population. It is possible that a change in incidents is partly due to the intervention and partly due to other factors. If these other factors can be hypothesised and data is available on them, then this assessment may be possible. This is sometimes called a “counterfactual evaluation”: determining what would have happened in the absence of the intervention.
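One common way to approximate the counterfactual described above is to compare the change at the intervention sites with the change over the same period in a similar area that did not receive the intervention (a difference-in-differences style comparison). The sketch below uses hypothetical figures.

```python
# Illustrative counterfactual sketch: difference-in-differences style
# comparison with a similar area that did not receive the intervention
# (all figures hypothetical).
intervention_before, intervention_after = 50, 35   # annual incidents
comparison_before, comparison_after = 48, 44       # annual incidents

# Change expected without the intervention, inferred from the
# comparison area's trend over the same period.
background_change = comparison_after - comparison_before
observed_change = intervention_after - intervention_before
attributable_change = observed_change - background_change

print(f"Observed change at intervention sites: {observed_change}")
print(f"Background change (comparison sites):  {background_change}")
print(f"Change attributable to intervention:   {attributable_change}")
```

This only holds if the comparison area is genuinely similar and subject to the same other factors (passenger numbers, recording practice, local population) as the intervention area.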