Updated May 04, 2026
Student feedback is usually gathered after teaching has happened; far less often are students invited into the process of reviewing teaching itself. That is why "Students as Partners in Peer Review of Teaching: A Collaborative Model Involving the Students' Union", by Emma Hollenberg and Mario Pezzino in the Student Engagement in Higher Education Journal, matters. For universities trying to make student voice more useful for teaching enhancement, the paper offers a practical shift: move students closer to the review conversation, not only the survey response.
Many universities already collect module evaluations, NSS comments, and representative feedback, but those channels do not always connect neatly to how teaching is reviewed or improved. Staff-only peer review can miss how teaching feels from the student side, while survey results alone can flatten the detail needed for a developmental conversation. That leaves a gap between student feedback and teaching enhancement.
This paper describes the introduction of a voluntary peer review of teaching model in a research-intensive UK institution, involving academic reviewers, student reviewers, and the Students' Union. It is best read as an implementation-focused case study rather than a controlled impact study. The central question is highly practical for UK higher education teams: what changes when students are treated as partners in teaching review, and what conditions make that partnership useful rather than tokenistic?
The paper's core contribution is to reposition students from commentators to collaborators. Instead of asking students to react only through end-point evaluations, the model brings student reviewers into the review process itself. That matters because prior work on student evaluations has shown that feedback becomes more useful when staff can discuss it, starting a conversation rather than ending one.
The Students' Union is not treated as a side participant. The paper highlights the value of involving both internal stakeholders and an external student representative body when running the scheme. That broadens legitimacy and makes the process less like a private academic exercise.
The abstract captures that design choice directly:
"internal (academic reviewers and student reviewers) and external (Students' Union) stakeholders"
The paper also makes clear that partnership alone is not a guarantee of improvement. Its emphasis is not simply that student-inclusive review is a good idea, but that the scheme depends on the right conditions and has real challenges to work through if it is going to affect teaching practices, academic culture, and student experience. For UK universities, that is the useful realism in the paper. Bringing students into peer review is not a symbolic add-on. It changes the social conditions of review and therefore needs careful design.
A second practical insight is that voluntary teaching review can carry a developmental purpose that standard evaluation systems often struggle to hold. Surveys can show patterns, but they rarely create shared interpretation on their own. A collaborative review model gives institutions a way to connect student perspective, academic expertise, and enhancement action in one process. That makes the paper a useful complement to evidence that student evaluations improve when staff and students redesign them together.
For UK universities, the first implication is to stop treating student comments and peer review as separate evidence streams. If a department is already seeing repeated issues in module evaluations or NSS free text, those patterns should inform what a teaching review looks at. A defensible NSS open-text analysis methodology can help teams distinguish one-off frustration from repeated concerns before reviewers ever meet. The benefit is a more focused review conversation and less time spent arguing over anecdote.
Second, institutions should define the purpose of student-inclusive teaching review very clearly. Is the scheme developmental, quality-assurance-oriented, or a mixture of both? What can student reviewers comment on, and how is their input used? The paper's emphasis on conditions and challenges suggests that role clarity and trust are not optional extras. They are what stops partnership from becoming performative. The benefit is a safer, more credible process for staff and students alike.
Third, universities should use the Students' Union as an operational partner, not just a consultation point. That can widen participation, improve legitimacy, and help schemes reach beyond the same small group of already confident student representatives. The practical gain is better reach across the student body, especially where institutions want more inclusive routes into teaching enhancement work.
Finally, any university piloting this model should show what changed afterwards. If student reviewers help surface issues but nobody can see the response, the scheme will reproduce the same trust problem that weakens many survey systems. Closing the loop still matters here, just as it does in wider student voice initiatives. The benefit is stronger confidence that teaching review is producing visible improvement rather than another closed process.
Q: How should a university pilot student-inclusive peer review of teaching without overcomplicating the process?
A: Start small and keep the remit tight. Choose a voluntary pilot, define the developmental purpose clearly, brief academic and student reviewers together, and use existing comment data to identify which themes deserve attention first. That makes the review easier to scope and reduces the risk of asking student reviewers to comment on everything at once.
Q: What are the methodological limits of this paper?
A: This is an implementation-focused case study from one research-intensive UK institution, not a comparative trial. It is most useful as a model for scheme design and institutional reflection. As an inference from the paper's framing and abstract, the strongest value lies in the practical structure it describes rather than in a universal causal claim that the same model will work everywhere unchanged.
Q: What does this change about student voice more broadly?
A: It pushes student voice closer to the point where teaching decisions are interpreted and acted on. Instead of asking students only to supply comments after the fact, universities can involve them more directly in how teaching quality is reviewed. That gives institutions another route for turning student perspective into teaching improvement, especially when survey themes, reviewer dialogue, and follow-up action are connected properly.
[Paper Source]: Hollenberg, E. and Pezzino, M., "Students as Partners in Peer Review of Teaching: A Collaborative Model Involving the Students' Union", Student Engagement in Higher Education Journal. DOI: 10.66561/sehej.v7i3.1442