Updated Apr 30, 2026
When universities explain weak engagement mainly through student background, they risk ignoring the parts of the experience they still control. Swarnima Sharma and Mamta Garg's paper in the Student Engagement in Higher Education Journal, "What drives Student Engagement in higher education? Exploring key demographic, institutional and personal variables", is useful reading for UK teams working with student voice because, in a survey of 553 students, it suggests institutional conditions predict engagement more strongly than demographics. For universities trying to interpret engagement gaps, that is a practical shift in emphasis: look first at what students are experiencing, not only at who they are.
The paper is set against the massification and diversification of Indian higher education, but the underlying question travels well to the UK. Universities everywhere are dealing with more varied student circumstances, tighter scrutiny of outcomes, and persistent pressure to improve engagement without defaulting to student-deficit explanations. That makes it important to ask which influences on engagement are actually strongest.
Sharma and Garg examine that question through differential analysis and stepwise multiple regression on data from 553 students. The study tests three broad groups of predictors: demographic variables such as gender, socioeconomic status, and locale; institutional variables including modes of curriculum transaction and organisational culture and ambience; and personal variables including lifestyle, achievement motivation, and perceived relevance of the curriculum. For UK higher education teams, the study matters because it turns a familiar debate into a practical one: which levers should institutions prioritise if they want engagement to improve?
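For readers who want to see the shape of that analysis, here is a minimal sketch of forward stepwise regression in Python. It is illustrative only: the column names are hypothetical, and the authors' exact selection criteria, software, and variable coding are not described here.

```python
# Illustrative sketch of forward stepwise regression, in the spirit of the
# paper's method. Column names are hypothetical assumptions, not the
# authors' reported coding.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, outcome: str, candidates: list[str],
                     p_enter: float = 0.05) -> list[str]:
    """Add predictors one at a time, keeping each new variable only if it
    is significant at p_enter. Returns the selected predictors in order."""
    selected: list[str] = []
    remaining = list(candidates)
    while remaining:
        # Try each remaining predictor and record its p-value when added.
        trials = []
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[outcome], X).fit()
            trials.append((model.pvalues[var], var))
        best_p, best_var = min(trials)
        if best_p >= p_enter:
            break  # no remaining predictor clears the entry threshold
        selected.append(best_var)
        remaining.remove(best_var)
    return selected

# Hypothetical usage, mirroring the paper's three variable groups:
# forward_stepwise(df, "engagement",
#                  ["gender", "ses", "locale",            # demographic
#                   "transaction_mode", "culture",        # institutional
#                   "lifestyle", "motivation", "relevance"])  # personal
```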
The first finding is that demographic background explained less than many institutions assume. According to the abstract, demographic variables had a non-significant influence on student engagement apart from gender. That does not mean demographic differences never matter. It does mean universities should be careful about treating low engagement mainly as a fixed trait of particular student groups when the stronger levers may sit elsewhere.
Institutional variables were the strongest predictors in the model. The overall regression explained 30.5% of the variance in student engagement, with institutional factors accounting for the largest share.
"Institutional variables demonstrated greater predictive strength, jointly accounting for 21% of the variance"
That is the central practical message of the paper. If teaching delivery, course climate, or the wider organisational environment are shaping engagement more strongly than most background variables, then disengagement is not simply something students bring with them. It is also something institutions help produce, reinforce, or reduce.
Personal variables still mattered, but they came second. Lifestyle, achievement motivation, and perceived relevance of the curriculum explained an additional 10% of the variance. That is important because it avoids a false choice between structural and individual explanations. Students' motivation and habits do matter, but the paper suggests those personal factors sit alongside, and partly within, the conditions institutions create.
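Read together, figures like "21% institutional, a further 10% personal" are the language of incremental R-squared: each block of predictors enters the model in turn, and its contribution is the rise in explained variance. The sketch below shows that arithmetic under assumed block contents and entry order, which the paper itself may define differently.

```python
# A sketch of blockwise (hierarchical) variance decomposition. Block
# membership and entry order are assumptions for illustration, not the
# authors' reported steps.
import pandas as pd
import statsmodels.api as sm

def incremental_r2(df: pd.DataFrame, outcome: str,
                   blocks: dict[str, list[str]]) -> dict[str, float]:
    """Fit nested OLS models, adding one named block at a time, and
    return each block's increment to R-squared."""
    increments: dict[str, float] = {}
    used: list[str] = []
    prev_r2 = 0.0
    for name, cols in blocks.items():
        used += cols
        model = sm.OLS(df[outcome], sm.add_constant(df[used])).fit()
        increments[name] = model.rsquared - prev_r2
        prev_r2 = model.rsquared
    return increments

# e.g. incremental_r2(df, "engagement", {
#     "institutional": ["transaction_mode", "culture"],
#     "personal": ["lifestyle", "motivation", "relevance"],
# })
# The increments would sum to the overall R-squared, the analogue of the
# ~30.5% the paper reports.
```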
The emphasis on curriculum relevance is especially useful for UK teams. If students do not see the curriculum as meaningful, connected, or worth investing in, engagement weakens. That aligns with wider higher education evidence that belonging, clarity, and relevance are closely linked, including recent findings on what new students need to feel they belong and stay well. In practice, engagement is rarely just about attendance or effort. It is also about whether the academic environment feels coherent, supportive, and worth showing up for.
The first implication for UK universities is to stop treating engagement mainly as a student attribute. If institutional conditions are stronger predictors than most demographic variables, then the first questions should be about the experience itself: how teaching is delivered, whether students understand why the curriculum matters, how the campus or online environment feels, and whether support is visible and usable. That gives teams a more actionable starting point than broad assumptions about who is or is not engaged.
Second, institutions should separate structural patterns from anecdote. A single survey item on engagement is too blunt on its own. Teams need to know whether weak engagement is being driven by timetable friction, low curriculum relevance, poor communication, weak peer connection, or something else. That is why student survey data works better when universities benchmark and triangulate it. Pattern-level analysis is what turns engagement from a vague concern into something departments can actually improve.
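As a concrete illustration of pattern-level analysis, the hypothetical sketch below joins an engagement score to tagged friction themes and compares them across departments. Every file, table, and column name is invented; real survey extracts will differ.

```python
# Hypothetical sketch of pattern-level triangulation: which friction
# themes co-occur with low engagement, and where. All names are invented.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical extract
# Expected columns: department, engagement_score, theme
# where `theme` tags each comment, e.g. "timetable", "relevance",
# "communication", "peer_connection".

summary = (
    responses
    .groupby(["department", "theme"])
    .agg(n=("engagement_score", "size"),
         mean_engagement=("engagement_score", "mean"))
    .reset_index()
    .sort_values(["department", "mean_engagement"])
)

# Departments where a theme co-occurs with low mean engagement are
# candidates for targeted follow-up rather than institution-wide fixes.
print(summary.head(10))
```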
Third, universities should treat open-text comments as diagnostic evidence, not just colour around the scores. This is where Student Voice Analytics fits naturally. If students repeatedly mention unclear teaching expectations, poor ambience, low relevance, or support that feels distant, those comments help teams distinguish between individual disengagement and institutional friction. That makes it easier to target interventions by course, cohort, or demographic group, which improves the odds that action lands where it will help most.
The broader lesson is straightforward. Engagement improves fastest when universities look at the design of the student experience, not only at the characteristics of the students moving through it. For Student Experience leads, PVCs, and insights teams, that is a more constructive frame because it points towards levers they can actually change.
Q: How should a university apply these findings if it already runs student engagement or experience surveys?
A: Start by reviewing which parts of the survey point to institutional conditions rather than general sentiment alone. Add or refine open-text prompts so students can explain what is helping or hindering engagement in practice, then compare those themes across courses and cohorts. A structured approach such as the NSS open-text analysis methodology helps teams move from broad engagement scores to specific institutional actions.
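One simple way to compare open-text themes across cohorts, once comments have been coded (manually or by a classifier), is to look at each theme's share within a cohort rather than raw counts, so growth in a friction theme stands out even as cohort sizes change. The sketch below uses hypothetical data and is not itself the NSS open-text methodology.

```python
# Minimal sketch of theme comparison across cohorts. Data is invented;
# in practice `theme` would come from coded open-text comments.
import pandas as pd

comments = pd.DataFrame({
    "cohort": ["2024", "2024", "2025", "2025", "2025"],
    "theme":  ["relevance", "support", "relevance", "timetable", "relevance"],
})

# Share of each theme within each cohort, normalised by cohort size.
theme_share = (
    comments.groupby("cohort")["theme"]
            .value_counts(normalize=True)
            .rename("share")
            .reset_index()
)
print(theme_share)
```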
Q: What should teams keep in mind about the methodology before generalising from this paper?
A: This is a survey-based study of 553 students in one national context, and the use of stepwise multiple regression shows association rather than causation. The exact variables and their strength may not transfer neatly into every UK setting. The paper is most useful as directional evidence that institutional design deserves serious analytical weight when universities interpret engagement data.
Q: What does this change about student voice practice more broadly?
A: It pushes student voice away from deficit narratives and towards institutional diagnosis. If universities want to understand engagement, they need evidence not only on which groups appear less engaged, but on what students say about teaching, culture, relevance, support, and participation. That makes free-text feedback more than commentary. It becomes a practical map of where the student experience is helping or hindering engagement.
[Paper Source]: Sharma, S. and Garg, M., "What drives Student Engagement in higher education? Exploring key demographic, institutional and personal variables", Student Engagement in Higher Education Journal. DOI: 10.66561/sehej.v7i3.1441