Student evaluations help improve teaching when staff can discuss them

Updated Apr 05, 2026

At Student Voice AI, we see a recurring institutional problem: universities collect module evaluation data, circulate dashboards, and assume improvement will follow. A paper in Assessment & Evaluation in Higher Education by Niels A. van der Baan, Tim P. J. N. B. Hennemann, Renee E. Stalmeijer, Diana H. J. M. Dolmans and Carolin Sehlbach, "Teachers' continuing professional development: using student evaluations to start a dialogue", shows why that assumption is too weak. For universities using student evaluations of teaching, the real issue is not only whether feedback is collected, but whether staff have the support to interpret it, test it, and learn from it.

Context and research question

Student evaluations of teaching (SET) often sit between two institutional purposes. On one side, they are used for quality assurance, performance monitoring, and course review. On the other, they are supposed to help individual teachers improve. Those purposes are related, but they are not the same. A survey process designed mainly for accountability can easily produce feedback that feels high stakes, thin on context, and hard for staff to use well.

This paper asks a practical question that matters for UK higher education teams: how do teachers actually use student evaluations to shape their continuing professional development? The authors explored that question through four semi-structured focus groups with 24 teachers at Maastricht University, then analysed the data thematically through the lens of feedback literacy. That makes the study especially useful for Student Experience teams, educational developers, and PVCs who want evaluation systems to drive better teaching rather than just better reporting.

Key findings

The first important finding is that student evaluations support continuing professional development mainly through informal learning, not formal training. Teachers did use SET feedback to reflect on and adjust their teaching, but the value did not come from receiving a report alone. It came from the work staff did afterwards to make sense of what students were actually saying and what, if anything, needed to change.

Dialogue was the central mechanism that made feedback usable. Teachers described discussing evaluation feedback with students, peers, colleagues, and supervisors. Those conversations served different functions: making sense of comments, judging whether the feedback was credible, and working out how to respond. That matters because it reframes student evaluations as the start of a developmental process, not the end of one.

The paper's conclusion puts that point clearly:

"teachers have to engage in a two-way dialogue with students, peers and colleagues"

Teachers also judged evaluation feedback contextually rather than mechanically. According to the abstract, how staff weighed feedback depended on several factors, including their experience and role, the seniority of students, and whether a point was repeated. In practice, that means a single negative remark does not carry the same weight as a pattern that appears across cohorts or modules. For UK institutions, this is a useful reminder that student voice systems need ways to distinguish repeated themes from isolated noise.

The broader implication is that institutions should stop treating evaluation reports as self-explanatory. If feedback is sent to staff without time, structure, or peer discussion, a lot of its developmental value is lost. The paper therefore recommends that universities create opportunities such as peer-to-peer coaching sessions where teachers can discuss what they received. That is a more defensible way to improve educational quality than assuming every lecturer will independently decode a spreadsheet or comment dump.

Practical implications

For UK universities, the first implication is to design a post-survey process, not just a survey instrument. If module evaluations are meant to improve teaching, institutions should create structured opportunities for staff to review themes with programme leaders, peers, or educational developers. Without that step, student feedback is more likely to be experienced as surveillance than support.

Second, institutions should report patterns, not just raw comments. This paper underlines the importance of repetition and context when judging feedback. That is exactly where systematic analysis helps. If several cohorts raise the same issue about clarity, pace, feedback timeliness, or assessment guidance, that signal deserves more weight than a one-off remark. Student Voice Analytics fits naturally here because it helps universities categorise and benchmark free-text comments at scale, making it easier for staff teams to discuss evidence rather than anecdotes.
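To make that distinction concrete, here is a minimal sketch in Python of the counting step, assuming comments have already been tagged with a theme and a cohort. The tagging is the hard part in practice, and the data and the two-cohort threshold below are purely illustrative; this is not Student Voice Analytics' actual method.

  # Minimal illustrative sketch: flag free-text themes that recur across
  # cohorts, so repeated signals outweigh one-off remarks.
  from collections import defaultdict

  # Hypothetical input: each comment already tagged with a theme and a cohort.
  comments = [
      {"cohort": "2024-25", "theme": "assessment guidance"},
      {"cohort": "2024-25", "theme": "lecture pace"},
      {"cohort": "2025-26", "theme": "assessment guidance"},
      {"cohort": "2025-26", "theme": "timetabling"},
  ]

  # Count mentions of each theme per cohort.
  mentions = defaultdict(lambda: defaultdict(int))
  for comment in comments:
      mentions[comment["theme"]][comment["cohort"]] += 1

  # A theme raised in two or more cohorts is treated as a repeated pattern;
  # anything else stays visible but is flagged as an isolated remark.
  for theme, by_cohort in sorted(mentions.items()):
      status = "repeated pattern" if len(by_cohort) >= 2 else "isolated remark"
      total = sum(by_cohort.values())
      print(f"{theme}: {total} mention(s) in {len(by_cohort)} cohort(s) -> {status}")

The design point is the framing of the output: a theme that recurs across cohorts is presented to staff as a pattern worth discussing, while a single mention stays visible but is weighted accordingly.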

Third, universities should separate the questions they want student evaluations to answer. Some comments relate to a specific teacher's communication or organisation. Others relate to course design, assessment structure, timetabling, or wider programme issues. When those layers are blurred together, staff can struggle to decide what they personally own and what needs a wider institutional response.

Finally, this paper is a useful prompt to close the feedback loop with staff as carefully as universities try to close it with students. If an institution wants academics to engage well with student voice, it should invest in the conditions that make that possible: interpretable reports, peer discussion, developmental framing, and support for acting on what the comments show.

FAQ

Q: How should a UK university turn end-of-module evaluations into something staff can actually use for development?

A: Start by pairing the survey output with a structured discussion process. That could mean programme-level review meetings, peer coaching, or short guided reflection sessions with an educational developer. The key is to move beyond sending staff a dashboard. Give them grouped themes, indicate which issues are repeated across cohorts, and create space to distinguish actionable teaching changes from broader course or institutional issues.

Q: What should institutions keep in mind before generalising from this study?

A: This is a qualitative study based on four focus groups with 24 teachers at one Dutch university. It is best read as strong practice-oriented evidence about how feedback gets used, rather than as a universal rule for every institutional setting. UK teams should treat the findings as a prompt to test their own evaluation workflows, especially how staff interpret comments and what support they receive after results are released.

Q: What does this change about how universities should think about student voice?

A: It shifts the focus from collection to use. Student voice is not only about getting students to complete a survey. It is also about making sure the resulting comments can be interpreted fairly, discussed productively, and translated into better teaching and course design. If feedback never becomes dialogue, institutions risk collecting more student voice without increasing its impact.

References

[Paper Source]: Niels A. van der Baan, Tim P. J. N. B. Hennemann, Renee E. Stalmeijer, Diana H. J. M. Dolmans and Carolin Sehlbach, "Teachers' continuing professional development: using student evaluations to start a dialogue", Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2025.2584136

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
