Why neutral students stay silent in teaching evaluations

Updated Apr 17, 2026

The usual question about teaching evaluations is how to get more students to complete them. Anne Mesny and Line Dubé's Assessment & Evaluation in Higher Education paper, "Students’ ‘fast and frugal’ heuristics in SET completion: a preliminary typology", suggests a better question: which students think their feedback is worth giving at all? For universities using student voice to guide module enhancement, that shift matters. If students with middling experiences stay silent, the evidence can lean too heavily towards the strongly pleased and the strongly frustrated.

Context and research question

The move from paper SETs to online surveys solved one administrative problem and created another. Response rates fell, often sharply, and the literature has spent years testing reminders, incentives, and in-class completion windows to recover participation. Yet the evidence on what works is mixed, which hints at a deeper issue: students are not all making the same decision for the same reason.

Mesny and Dubé address that issue through a qualitative study at a major Canadian business school, where SETs include both Likert-scale items and open-ended questions and are used for teaching improvement as well as faculty career decisions. They conducted 39 semi-structured interviews across two collection periods in 2023 and 2024 to understand how students decide whether to complete end-of-semester teaching evaluations. The value of the paper is that it moves away from treating response behaviour as a simple yes-or-no outcome and instead examines the decision rules students actually use.

Key findings

The paper's central finding is that SET completion is not binary. The authors identify seven decision profiles, ranging from students who complete every evaluation to those who never do, with several conditional patterns in between. Some students respond out of habit or principle. Others respond only when a specific cue, such as protected class time, makes completion easy. Still others decide based on whether the course experience felt notably good or notably bad.

In-class time matters, but it is not a universal fix. For some students, a protected window during class is enough to trigger completion. For others, it makes little difference because they were already going to respond, or were already disengaged. The paper also notes that timing within the class matters. A slot at the very end can be easy to ignore if students are ready to leave. The practical lesson is that the familiar advice to "give students time in class" is only one lever, not a complete response strategy.

The most important blind spot is the silent middle. Many students in the study said they complete SETs only when they had something clearly positive or clearly negative to say. A neutral, ordinary, or "good-enough" course experience often did not feel worth reporting. That matters because it means non-response cannot safely be read as satisfaction, indifference, or approval.

"silence becomes radically ambiguous"

The paper also challenges the idea of a large fixed group of non-responders. Genuine never-responders appeared to be relatively rare in this sample. Many students participated intermittently, depending on time, course experience, perceived usefulness, or simple convenience. Several students had also shifted their heuristic over time, which suggests that response habits are not fixed traits. Universities can shape them, for better or worse.

Higher response rates do not automatically mean higher-quality evidence. Some of the students who always completed SETs described doing the closed-ended items quickly and with limited reflection. By contrast, students with the strongest experiences sometimes provided the richest comments, but only selectively. That creates a methodological tension: pushing volume alone may increase noise as much as insight. For UK teams, the real objective is not just more responses, but feedback that is representative enough and specific enough to support action.

Practical implications

The first implication for UK higher education teams is to design for the silent middle, not only for the extremes. If students think neutral feedback has no value, institutions need to say explicitly that it does. Evaluation prompts can ask what worked adequately but could be better, what should stay the same, and what small changes would most improve the module. That reduces the risk that ordinary but important experience disappears from the dataset, which gives teams a more balanced basis for improvement.

Second, universities should treat in-class completion time as a targeted intervention rather than a cure-all. The paper supports what many survey leads already suspect: protected time can help, but not for everyone and not in every format. It is most useful when paired with clear framing about why the feedback matters and when placed at a point in the session where students are still attentive. That fits closely with existing evidence on what gets students to fill in teaching evaluations. The benefit is a response process built around actual student behaviour rather than institutional habit.

Third, institutions should monitor whose silence is shaping the evidence. This paper is about heuristics rather than demographic bias, but the implication connects directly to wider work on who fills in student evaluations and where non-response bias appears. If neutral voices, late-year students, or particular groups are missing, a module may look more polarised than it really is. Student Voice Analytics is relevant here because it helps universities compare theme coverage and comment volume across modules and cohorts, making it easier to see where thin or skewed qualitative evidence needs qualification before leaders act on it.

Finally, universities should treat survey participation and comment analysis as one evidence system. If response behaviour is shaped by simple heuristics, then open-text analysis needs a defensible method that takes thin samples and missing middles seriously. That is where a stable workflow such as our NSS open-text analysis methodology becomes useful. The benefit is better judgement about when a pattern is strong enough to act on, and when it is still too partial to support confident decisions.

FAQ

Q: How should a UK university apply this paper when reviewing its module evaluation process?

A: Start by testing whether your current process mostly captures the extremes. Review a sample of modules with average scores and check whether comment volumes are thin or skewed towards the strongly negative and strongly positive. Then pilot three small changes together: a well-timed in-class completion window, clearer messaging that "ordinary" feedback is useful, and one open-text prompt that invites students to describe what should be improved as well as what should be retained. That gives teams a practical way to surface the silent middle before redesigning the whole instrument.
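As a rough illustration of that audit step, the sketch below flags modules whose open-text evidence looks thin or polarised. It assumes a hypothetical per-comment export with module_id, overall_score, and sentiment columns; the file name, column names, and thresholds are placeholders to adapt to local data, not a fixed schema.

```python
# Minimal sketch: flag modules whose open-text evidence looks thin or polarised.
# Assumes a hypothetical CSV export with one row per comment and columns:
#   module_id, overall_score (1-5 Likert), sentiment (-1.0 to 1.0).
# All names and thresholds below are illustrative.
import pandas as pd

comments = pd.read_csv("module_comments.csv")

summary = comments.groupby("module_id").agg(
    n_comments=("sentiment", "size"),
    mean_score=("overall_score", "mean"),
    share_neutral=("sentiment", lambda s: (s.abs() < 0.2).mean()),  # roughly neutral comments
)

# Thin evidence: few comments despite an "average" score profile.
summary["thin"] = (summary["n_comments"] < 10) & summary["mean_score"].between(3.0, 4.0)

# Polarised evidence: hardly any neutral comments, suggesting the silent middle is missing.
summary["polarised"] = summary["share_neutral"] < 0.15

print(summary[summary["thin"] | summary["polarised"]].sort_values("n_comments"))
```

The thresholds (ten comments, a 0.15 neutral share) are starting points for a local pilot rather than evidence-based cut-offs; the point is to make the "missing middle" visible before acting on module-level comment data.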

Q: What are the methodological limits of this study?

A: This is a qualitative study from one Canadian business school, based on 39 interviews. Its strength is explanation, not prevalence. It tells us how students describe their decision rules, not how common each profile is across the sector. UK institutions should therefore use it as a diagnostic framework for local testing, not as a definitive estimate of how every student population behaves.

Q: What does this change about student voice work more broadly?

A: It reinforces that student voice is not only about collecting more responses. It is also about understanding what non-response means. If silence can reflect neutrality, time pressure, weak trust, or uncertainty about usefulness, then institutions need to interpret survey data more carefully and connect scores with comments, response patterns, and local context. That makes student voice more methodologically credible and more useful for quality enhancement.

References

Mesny, A. and Dubé, L. "Students’ ‘fast and frugal’ heuristics in SET completion: a preliminary typology", Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2026.2656291

