Our Head of External Affairs Paul Clarke shares lessons from developing Brightside’s Theory of Change, and how such an approach can help universities improve evaluation.
The Office for Students’ new approach to Access and Participation Plans promises a big change from the current annual access agreements regime. It makes higher education providers part of a concerted drive towards sector-wide goals: eliminating the gaps in entry rates, non-continuation and outcomes between groups based on their background, ethnicity or disability.
To drive progress and allow providers to concentrate on delivery, access and participation plans will move to a more strategic five-year cycle, with greater focus placed on impact and evaluation. This includes the establishment of a new ‘What Works Centre’ – the Evidence and Impact Exchange – which will provide evidence on the impact of different approaches to widening access and improving outcomes and progression for disadvantaged students.
Rigorous evaluation and impact measurement are essential for effective outreach work, but they aren’t easy. Understanding the changes we want to make at the individual level, and how to make them happen, is crucial for designing programmes that will contribute to the widespread changes needed across the sector.
While the language of impact measurement can often feel dry and technical, the reason for doing this work is fundamental to our purpose. Without knowing how effective different widening participation approaches are, we risk wasting not just money but also the time of the students most in need of effective, high-quality support.
Brightside’s approach has been to develop a Theory of Change, which identifies the ultimate goal of our mentoring, then the milestones on the way, and the activities and resources required to reach them. Through interviews with our mentees we formulated the ultimate goal of our mentoring: helping young people make ‘confident and informed decisions’ about their future. We work with a wide range of target groups, such as white working-class boys or young people with disabilities. Yet no matter who they were, young people often felt confused about the choices they had to make about education and careers. They wanted their mentor to provide both the knowledge to make sense of those choices and the personal support and encouragement necessary to feel confident they were making the right one.
We then examined how young people make decisions and the factors that influence them, through a widespread literature review and consultation with young people and academic experts. This yielded a series of outcomes that young people needed to achieve, from positive behaviours such as self-efficacy, through specific knowledge about the university application process, to the increased social capital that would enable them to access opportunities. We have incorporated this learning into our mentoring programmes. For example, we have designed our mentoring schedules to include a focus on setting goals, informed by Snyder’s hope theory. Other topics include discussing personal strengths with mentees, as a means of improving self-efficacy and growth mindset. Our training now includes techniques mentors can use to encourage their mentees and focus their thinking. We have also produced a suite of e-learning resources covering topics such as student finance, to improve mentees’ practical knowledge of higher education.
Working with social investment consultants CAN Invest, we identified a number of academically verified scales and questions used by other organisations to measure these outcomes, ensuring the data we collected in evaluation surveys would be robust and could be benchmarked against other interventions. We recently received data from our first external impact report since implementing our new evaluation methodology, and were pleased to find that young people showed statistically significant improvements in outcomes such as knowledge and social capital, as well as in coping and hope.
The next stage of our journey is to find out why this is happening. For this, numbers can only tell you so much: you need to ask the people behind them to get the full story. We are now building on the qualitative data we collect in our evaluations with more in-depth focus groups with mentees to explore their experiences. Combined with the quality indicators we have developed for our mentoring, this is allowing us to draw connections between aspects of our delivery, such as programme length or rapport with the mentor, and the outcomes achieved. We can then refine successful aspects and apply them to other programmes, and address those that work less well, in a cycle of continual improvement. Ultimately, impact data should be about improving performance, not proving it.
Our approach to measuring impact was highlighted in the recent OfS-commissioned research into evaluating pre-16 outreach activities, which also recommended closer collaboration on impact measurement between universities and third sector organisations. We are pleased that Dr Julian Crockford, Widening Participation Research and Evaluation Unit manager at the University of Sheffield, has said Brightside is ‘sector leading in its commitment to effective and robust evaluation’ and ‘has much by way of example to offer us HE-based outreach evaluators.’
We have been running sessions on our theory-driven approach for university widening participation teams, and are happy to share our learning and swap notes with other teams. The scale of the challenge we are looking to address means that we will only succeed if we come together as a sector to share support and ideas. Please contact me at firstname.lastname@example.org if you would like to find out more.