Updated Mar 06, 2026
At Student Voice AI, we work with universities that want to measure belonging with enough rigour that they can act on it. A recent paper in Studies in Higher Education by Aike S. Dias-Broens, Marieke Meeuwisse, Marleen H.M. de Moor and Sabine E. Severiens tackles a problem UK teams often run into: you can track belonging over time, but only if your survey scale means the same thing at each timepoint and for each group of students. The authors test whether a multidimensional belonging scale stays comparable across the first year, and what the results imply for interpreting belonging gaps for students with a migration background. Read the paper here.
Belonging is often treated as a stable construct, something students either do or do not have. In practice, institutions measure it repeatedly across the year (induction, mid-semester pulses, end-of-year surveys) and use the results to judge whether interventions are working. The risk is that we over-interpret small shifts in scores as real change, when part of the change may be that students are interpreting the questions differently as they settle in.
Dias-Broens et al. focus on a multidimensional measure, the University Belonging scale covering Acceptance, Recognition, Commonality and Support (UB-ARCS). They collected responses at three points during first year (T1: N = 372, T2: N = 273, T3: N = 254) and tested whether the scale was measurement invariant across time and across groups (migration background and generation status in higher education). They then used linear mixed modelling to examine how belonging changed during the year, and whether that change differed across groups.
One headline finding is methodological: belonging measures can drift over time, even within a single academic year. The UB-ARCS scale was stable enough to compare in the first semester, but not across the full year. In other words, the meaning students attached to the belonging items appeared to evolve as first year progressed.
"While the UB-ARCS was measurement invariant in the first semester (T1–T2), neither full longitudinal measurement invariance across all three timepoints nor multigroup measurement invariance for migration background at T3 was achieved."
That matters for UK HE because many student experience dashboards implicitly assume comparability. If you are comparing April to October, or comparing one cohort to the next, you are assuming students understand and use the scale in the same way. This paper suggests that assumption does not always hold, especially later in the year.
At the same time, the study does not suggest belonging is unknowable. Using linear mixed modelling, the authors found a significant increase in belonging across the year. The practical message is to separate two questions: is belonging changing, and are we measuring it consistently enough to interpret the change as a true shift rather than a shift in meaning?
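For teams holding repeated survey waves, the "is belonging changing" half of that question can be explored with the same family of model the authors use. Below is a minimal sketch, assuming a hypothetical long-format export with student_id, timepoint and belonging columns; it is not the paper's data or exact model specification.

```python
# Minimal sketch of the "is belonging changing?" question with a linear mixed model.
# Assumes a hypothetical long-format table: one row per student per survey wave,
# with columns "student_id", "timepoint" (T1/T2/T3) and "belonging" (scale mean).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("belonging_long.csv").dropna(subset=["belonging"])  # hypothetical export

# A random intercept per student accounts for the same people being surveyed repeatedly;
# the fixed effect of timepoint estimates the average change across the year.
model = smf.mixedlm("belonging ~ C(timepoint)", data=df, groups=df["student_id"])
result = model.fit()
print(result.summary())

# To mirror the paper's group question, an interaction such as
# "belonging ~ C(timepoint) * migration_background" would test whether the rate of
# change differs between groups (column name hypothetical).
```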
The results also highlight an equity pattern that UK institutions will recognise. Students with a migration background consistently reported lower belonging, even though the overall trend was upwards. The authors did not find differences in the strength of the increase across migration background and generation status in higher education. Put simply, the gap persisted.
For UK Student Experience teams, Market Insights professionals, and Pro-Vice-Chancellors for Education, three practical moves follow from this paper.
First, treat comparability as a requirement, not a nice-to-have. If belonging results are used to allocate resource, evaluate interventions, or report on equality gaps, build in checks that your measure is behaving as expected. Where you have the capability, test measurement invariance formally. Where you do not, use pragmatic safeguards: keep a stable set of core belonging items, review item-level shifts (not only the overall score), and avoid over-interpreting small changes when students are at different stages of their journey.
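To make the item-level safeguard concrete, here is a rough sketch of a drift check in Python. The file name, item column names, and the 0.3-point threshold are all illustrative placeholders to adapt, not a standard.

```python
# Pragmatic safeguard: look at item-level movement between waves, not just the scale mean.
# Assumes a hypothetical wide table with one row per response: "timepoint" plus item columns.
import pandas as pd

ITEM_COLS = ["acceptance_1", "acceptance_2", "recognition_1", "commonality_1", "support_1"]  # illustrative names

df = pd.read_csv("belonging_items.csv")  # hypothetical export

# Mean of each item at each wave, and the shift relative to the first wave.
item_means = df.groupby("timepoint")[ITEM_COLS].mean()
shift = item_means - item_means.iloc[0]

# Compare each item's shift with the scale-level shift: if one or two items drive
# all of the movement, treat "belonging went up" with caution.
scale_shift = shift.mean(axis=1)
divergence = shift.sub(scale_shift, axis=0).abs()
print(divergence.round(2))

flagged = (divergence > 0.3).any()  # placeholder threshold
print("Items drifting noticeably from the scale-level shift:", flagged[flagged].index.tolist())
```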
Second, combine belonging scales with open text, and align the questions to the construct. One reason scale meaning can drift is that students’ reference points change. Open-text prompts can surface what “accepted”, “recognised”, “in common” and “supported” now mean to students in practice. For example: “What has most affected your sense of belonging this term?” and “What has made it harder to feel part of your course or institution?” Analysed at scale, this kind of feedback provides the mechanism-level explanation that a scale alone cannot.
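As a very rough illustration of pairing open text with the scale, the sketch below tags comments against a small keyword list. The keyword lists, column names and file name are invented for illustration, and real categorisation (including what Student Voice Analytics does) is considerably richer than keyword matching.

```python
# Rough illustration only: route open-text belonging comments into simple themes
# so they can be read alongside scale scores by wave.
import pandas as pd

THEMES = {  # illustrative keyword lists, not a validated coding frame
    "peer networks": ["friends", "group work", "society", "whatsapp"],
    "teaching interaction": ["seminar", "lecturer", "tutor", "feedback"],
    "systems and comms": ["timetable", "email", "portal", "induction"],
}

def tag_comment(text: str) -> list[str]:
    """Return every theme whose keywords appear in the comment, else 'other'."""
    text = text.lower()
    return [theme for theme, words in THEMES.items() if any(w in text for w in words)] or ["other"]

comments = pd.read_csv("open_text.csv")  # hypothetical: "timepoint", "comment"
comments["themes"] = comments["comment"].map(tag_comment)
print(comments.explode("themes").groupby(["timepoint", "themes"]).size())
```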
Third, segment belonging and act early when gaps persist. The paper’s finding on migration background is a reminder that average improvement can coexist with stable inequity. In UK terms, the comparable move is to review belonging themes by key student groups (for example, commuter status, first-in-family status, international domicile, and ethnicity), then prioritise interventions that shift the lived experience, not only the narrative. Because belonging develops through everyday interactions and systems (teaching practices, communications, peer networks, and support processes), the most effective interventions are often operational.
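A simple way to check whether a gap persists alongside an average rise is to track group means, and the gap between them, at each wave. The sketch below assumes hypothetical column names; the grouping columns are placeholders for whatever segmentation your data actually holds.

```python
# Track whether a belonging gap persists even when the overall trend is upwards.
# Assumes hypothetical columns: "timepoint", "belonging", and grouping columns
# coded "yes"/"no" (e.g. commuter, first-in-family).
import pandas as pd

df = pd.read_csv("belonging_long.csv")  # hypothetical export

for group_col in ["commuter", "first_in_family", "international"]:  # placeholder segments
    by_wave = df.groupby(["timepoint", group_col])["belonging"].mean().unstack(group_col)
    by_wave["gap"] = by_wave["yes"] - by_wave["no"]
    print(f"\n{group_col}:")
    print(by_wave.round(2))  # an average rise with a stable "gap" column mirrors the paper's pattern
```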
This is where Student Voice Analytics fits naturally. It helps institutions categorise and benchmark open-text comments about belonging and inclusion, and compare patterns across cohorts and timepoints. Used alongside survey scales, it makes it easier to see whether “belonging is up” because friction is genuinely being removed, or because students have recalibrated what the questions mean.
Q: How should UK universities use belonging survey results without over-interpreting them?
A: Use belonging as a trend signal, not a standalone verdict. Compare like with like (the same timepoint each year where possible), keep a stable core of items, and look at item-level movement rather than only a single index score. Then triangulate: pair the scale with open-text prompts that explain what changed, and review results by student group so you can spot persistent gaps that an average would hide.
Q: What is measurement invariance, and why does it matter for student experience surveys?
A: Measurement invariance is the idea that a scale measures the same construct in the same way across groups or time. If invariance does not hold, differences in scores can reflect different interpretations of the questions rather than genuine differences in the underlying experience. In this study, the belonging scale was comparable in the first semester, but comparability weakened across the full year, which is a warning against assuming that “belonging went up” always means the same thing at each timepoint.
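For teams without SEM expertise, a purely pragmatic illustration (not a formal invariance test, which is usually run as multi-group confirmatory factor analysis in SEM software such as lavaan in R) is to compare one-factor loadings for the same items at different waves and look for large shifts in how items behave. A sketch assuming the factor_analyzer Python package and illustrative item names:

```python
# Pragmatic illustration only: compare exploratory one-factor loadings for the same
# items at two waves. Large loading shifts suggest an item's meaning may have drifted.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed installed: pip install factor_analyzer

ITEM_COLS = ["acceptance_1", "acceptance_2", "recognition_1", "commonality_1", "support_1"]  # illustrative

df = pd.read_csv("belonging_items.csv")  # hypothetical export

loadings = {}
for wave in ["T1", "T3"]:
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(df.loc[df["timepoint"] == wave, ITEM_COLS].dropna())
    loadings[wave] = pd.Series(fa.loadings_.ravel(), index=ITEM_COLS)

comparison = pd.DataFrame(loadings)
comparison["shift"] = (comparison["T1"] - comparison["T3"]).abs()
print(comparison.round(2))
```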
Q: What does this imply for student voice work beyond belonging surveys?
A: It is a reminder that survey metrics are only as useful as their interpretation. When institutions rely on single-number indicators, they risk missing both shifts in meaning and the lived mechanisms underneath. Student voice work is strongest when scales, segmentation, and open comments are used together: scales provide monitoring, segmentation identifies who is being left behind, and comments explain what is happening in the system so teams can design targeted changes.
[Paper Source]: Aike S. Dias-Broens, Marieke Meeuwisse, Marleen H.M. de Moor and Sabine E. Severiens, "First-year students’ sense of belonging in higher education: examining measurement invariance and longitudinal development across migration background and generation status in HE", Studies in Higher Education. DOI: 10.1080/03075079.2026.2631047