Published Mar 02, 2026 · Updated Mar 02, 2026
At Student Voice AI, we often talk about how to analyse student feedback well, but that starts with a more basic question: how do you get enough students to take part in the first place? A 2025 paper in Higher Education by Yue Zheng and Qian Ma looks directly at what motivates students to participate in teaching evaluations, using a hypothetical scenario experiment with 1,061 undergraduate students in China. The study is highly relevant to UK teams running module evaluations and other feedback mechanisms, because response rates shape both how representative your data is and how much free-text comment data you have to work with. Read the paper here.
Universities rely on student evaluations of teaching (SETs) for module enhancement, staff development, and (in some contexts) high-stakes decisions. But participation is not automatic. Students have competing demands on their time, may not believe anything will change, and may respond selectively depending on whether they are particularly happy or unhappy with a module.
Zheng and Ma’s question is practical: which levers are most likely to increase participation in teaching evaluations? They test three common approaches that map closely to real-world practice: incentives, messaging about impact, and attempts to reduce burden by shortening surveys.
Incentives mattered most. In the experiment, a grade-related bonus was the strongest lever for increasing students’ willingness to complete teaching evaluations. UK institutions may be unwilling, or unable, to use grade incentives, but the result is still useful because it quantifies the general point: participation can be sensitive to tangible benefits, not just goodwill.
Explaining that evaluations affect lecturers’ careers also increased participation, but less than incentives. This is a reminder that “closing the loop” communications are not only about accountability; they are also about motivation. Students need a credible story about what their feedback is for, and how it is used.
Shortening the evaluation did not have a significant effect on participation. If you are relying on survey length alone as your response-rate strategy, this is a useful caution. Reducing burden can still be good design, but it may not solve the participation problem by itself.
Who responds may change depending on the lever you pull. The authors discuss two mechanisms that can skew evaluation data: students may be more likely to respond when dissatisfied (a “vent” pattern), or when satisfied (a “return” pattern). That is crucial for UK HE teams, because improving participation is not just about volume; it is about avoiding a dataset that over-represents particular groups or sentiment.
“Vent effect”: students use SETs to express frustration; “Return effect”: students with positive experiences are more likely to participate.
For UK universities, three implications stand out.
First, treat participation as a design problem, not a reminder problem. If shortening evaluations does not move the needle, focus on making the request feel meaningful: explain purpose, show impact, and make the timing and channel easy (for example, in-class completion, mobile-friendly links, and clear time expectations).
Second, be explicit about the ethics of incentives. Even if grade incentives are off the table, you can still test other approaches such as prize draws, recognition for student reps, or programme-level “you said, we did” reporting. The principle is to respect student effort and make the value exchange clear.
Third, measure representativeness alongside response rates. A higher response rate can still hide bias if certain groups are less likely to participate. This is where Student Voice Analytics can support the operational loop: if you increase participation, you should also see more stable theme coverage in open-text comments, fewer “thin” modules with too little qualitative data, and clearer benchmarking across cohorts.
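To make that representativeness check concrete, here is a minimal sketch that computes response rates and respondent shares by student group from an exported module roster, then flags groups that are clearly under-represented among respondents. The file name and column names (module_code, student_group, responded) are illustrative assumptions, not a reference to any particular system.

```python
# Minimal sketch: compare the respondent mix to the enrolled cohort.
# Assumes a hypothetical roster export with columns:
#   module_code, student_group, responded (0/1)
import pandas as pd

roster = pd.read_csv("module_roster.csv")  # illustrative file name

# Response rate per student group within each module
rates = (
    roster.groupby(["module_code", "student_group"])["responded"]
    .agg(enrolled="count", responses="sum")
    .reset_index()
)
rates["response_rate"] = rates["responses"] / rates["enrolled"]

# Representativeness: share of respondents vs share of enrolled students
rates["enrolled_share"] = (
    rates["enrolled"] / rates.groupby("module_code")["enrolled"].transform("sum")
)
rates["respondent_share"] = (
    rates["responses"] / rates.groupby("module_code")["responses"].transform("sum")
)
rates["representation_gap"] = rates["respondent_share"] - rates["enrolled_share"]

# Flag groups under-represented by more than 5 percentage points
print(rates[rates["representation_gap"] < -0.05])
```

A report like this, run each cycle, lets you see whether a response-rate gain is coming from across the cohort or from a narrow slice of it.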
Q: How can we improve module evaluation response rates without using grade incentives?
A: Start with credibility and convenience. Tell students what the evaluation is for, give one or two examples of changes made from past feedback, and make completion effortless (mobile-first, short estimated time, and a clear link in the VLE). Then test one change at a time, such as in-class completion windows or revised prompts, and track response by module and student group.
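If it helps to make “test one change at a time” concrete, the sketch below compares response rates between two illustrative arms, for example modules with and without an in-class completion window. The counts are placeholders, not real data, and the statsmodels two-proportion test is one simple way to check whether an observed difference is more than noise.

```python
# Minimal sketch: compare response rates for a single change,
# e.g. in-class completion window vs. no window.
# Counts below are illustrative placeholders, not real data.
from statsmodels.stats.proportion import proportions_ztest

responses = [412, 301]   # completed evaluations (with window, without)
enrolled = [900, 880]    # enrolled students in each arm

stat, p_value = proportions_ztest(count=responses, nobs=enrolled)
print(f"Response rates: {responses[0]/enrolled[0]:.1%} vs {responses[1]/enrolled[1]:.1%}")
print(f"Two-proportion z-test p-value: {p_value:.3f}")
```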
Q: What should we be cautious about when applying a hypothetical scenario experiment to our own context?
A: Hypothetical choices do not always translate into real behaviour, and the study was conducted in a specific national context. Treat the findings as evidence about plausible mechanisms, then validate locally by piloting changes and comparing both response rates and the quality of free-text comments across cycles.
Q: Does higher participation automatically make teaching evaluation data “better”?
A: Not automatically. Higher participation improves statistical stability, but what matters is whether the respondents are representative and whether the feedback is specific enough to act on. The most useful evaluation systems combine a healthy response rate with well-designed questions and robust analysis of open-text comments, so teams can move from “a score moved” to “we know what to fix”.
[Paper Source]: Zheng, Y. and Ma, Q. (2025) "Motivating student participation in teaching evaluations: evidence from a hypothetical scenario experiment in China", Higher Education. DOI: 10.1007/s10734-025-01534-9