What gets students to fill in teaching evaluations? Evidence on incentives and messaging

Updated Apr 03, 2026

Student feedback is only useful if enough students show up to give it. A 2025 paper in Higher Education asks a practical question every university survey lead faces: what actually gets students to complete teaching evaluations? Yue Zheng and Qian Ma test that question through a hypothetical scenario experiment with 1,061 undergraduate students in China. For UK teams running module evaluations and other student voice mechanisms, the relevance is immediate: response rates shape both how representative your data is and how much free text you have to work with.

Context and research question

Universities rely on student evaluations of teaching for module enhancement, staff development, and, in some contexts, high-stakes decisions. But participation is not automatic. Students have competing demands on their time, may not believe anything will change, and may respond selectively depending on whether they feel particularly happy or unhappy with a module.

Zheng and Ma’s question is practical: which levers are most likely to increase participation in teaching evaluations? They test three common approaches that map closely to real-world practice: incentives, messaging about impact, and attempts to reduce burden by shortening surveys. That makes the paper useful well beyond its immediate setting, because these are exactly the levers UK institutions already debate.

Key findings

Incentives mattered most. In the experiment, a grade-related bonus was the strongest lever for increasing students’ willingness to complete teaching evaluations. UK institutions may not want, or be able, to use grade incentives, but the finding still matters because it quantifies a broader point: participation rises when students see a tangible return on their time, not just a general appeal to goodwill.

Explaining that evaluations affect lecturers’ careers also increased participation, but less than incentives. This is a reminder that closing the loop in student voice initiatives is not only about accountability; it is also about motivation. Students need a credible story about what their feedback is for, how it is used, and why it is worth the effort to respond.

Shortening the evaluation did not have a significant effect on participation. If you are relying on survey length alone as your response-rate strategy, this is a useful caution. Reducing burden can still be good design, but it is unlikely to solve the participation problem by itself.

Who responds may change depending on the lever you pull. The authors discuss two mechanisms that can skew evaluation data: students may be more likely to respond when dissatisfied (a “vent” pattern), or when satisfied (a “return” pattern). That is crucial for UK HE teams, because improving participation is not just about volume. It is about avoiding a dataset that over-represents particular groups or sentiment.

“Vent effect”: students use student evaluations of teaching (SETs) to express frustration. “Return effect”: students with positive experiences are more likely to participate.

Practical implications

For UK universities, three practical implications stand out.

First, treat participation as a design problem, not a reminder problem. If shortening evaluations does not move the needle, focus on making the request feel meaningful: explain purpose, show impact, and make the timing and channel easy (for example, in-class completion, mobile-friendly links, and clear time expectations). That gives students a stronger reason to respond and gives teams a better chance of collecting usable evidence, which aligns with wider findings on student motivations and perceptions in teaching evaluations.

Second, be explicit about the ethics of incentives. Even if grade incentives are off the table, you can still test other approaches such as prize draws, recognition for student reps, or programme-level “you said, we did” reporting. The principle is to respect student effort and make the value exchange clear, without creating pressure that undermines trust.

Third, measure representativeness alongside response rates. A higher response rate can still hide bias if certain groups are less likely to participate. This is where Student Voice Analytics can support the operational loop: if you increase participation, you should also see more stable theme coverage in open-text comments, fewer “thin” modules with too little qualitative data, and clearer benchmarking across cohorts. That helps teams judge not just whether more students responded, but whether the evidence is becoming safer to act on.
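As a rough illustration of what measuring representativeness alongside response rates might look like, the sketch below compares each student group's share of respondents with its share of the cohort. The group labels and counts are entirely hypothetical, and this is a minimal back-of-envelope check, not a description of how any particular analytics product works:

```python
# Hypothetical cohort and respondent counts per student group.
cohort = {"year_1": 400, "year_2": 350, "international": 150, "part_time": 100}
respondents = {"year_1": 180, "year_2": 90, "international": 30, "part_time": 10}

total_cohort = sum(cohort.values())   # 1,000 students enrolled
total_resp = sum(respondents.values())  # 310 responses received

for group, n in cohort.items():
    rate = respondents[group] / n                 # within-group response rate
    cohort_share = n / total_cohort               # group's share of the cohort
    resp_share = respondents[group] / total_resp  # group's share of respondents
    # A representation ratio below 1 means the group is under-represented
    # among respondents, even if the headline response rate looks healthy.
    ratio = resp_share / cohort_share
    print(f"{group:13s} rate={rate:.0%}  cohort={cohort_share:.0%}  "
          f"respondents={resp_share:.0%}  representation={ratio:.2f}")
```

In this made-up example the overall response rate is 31%, but part-time students respond at only 10% and make up a much smaller share of respondents than of the cohort, which is exactly the kind of skew a headline rate hides.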

If you are trying to improve response rates without weakening the evidence, Student Voice Analytics helps universities analyse teaching evaluation comments at scale and spot where thin modules or uneven participation are limiting confidence. For a related read, start with “Who fills in student evaluations?” to see how response bias can distort the picture even when response rates rise.

FAQ

Q: How can we improve module evaluation response rates without using grade incentives?

A: Start with credibility and convenience. Tell students what the evaluation is for, give one or two examples of changes made from past feedback, and make completion effortless (mobile-first, short estimated time, and a clear link in the VLE). Then test one change at a time, such as in-class completion windows or revised prompts, and track response by module and student group so you can see what actually moves participation.

Q: What should we be cautious about when applying a hypothetical scenario experiment to our own context?

A: Hypothetical choices do not always translate into real behaviour, and the study was conducted in a specific national context. Treat the findings as evidence about plausible mechanisms, then validate them locally by piloting changes and comparing both response rates and the quality of free-text comments across cycles.

Q: Does higher participation automatically make teaching evaluation data “better”?

A: Not automatically. Higher participation improves statistical stability, but what matters is whether the respondents are representative and whether the feedback is specific enough to act on. The most useful evaluation systems combine a healthy response rate with well-designed questions and robust analysis of free-text comments in module evaluation, so teams can move from “a score moved” to “we know what to fix next”.

References

Zheng, Y., & Ma, Q. (2025). Motivating student participation in teaching evaluations: evidence from a hypothetical scenario experiment in China. Higher Education. DOI: 10.1007/s10734-025-01534-9

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
