Who Actually Fills In Student Evaluations? New Evidence on Non-Response Bias

Published Feb 17, 2026 · Updated Feb 17, 2026

At Student Voice AI, we spend a lot of time thinking about what student comments tell universities — but an equally important question is whose comments are missing. A new paper in Studies in Higher Education by Erica De Bruin, Ann L. Owen and Stephen Wu tackles that question head-on with a randomised experiment testing whether changes to the way teaching evaluations are solicited can produce a more representative picture of the student body. The findings have direct implications for any institution that relies on student evaluations to inform teaching quality decisions.

The problem: who is not responding?

Student evaluations of teaching (SETs) are a near-universal feature of higher education. They feed into promotion cases, module reviews, and quality-assurance reports. Yet since the shift to online administration, response rates have dropped — with reported figures typically ranging from 30 to 75 per cent. The question is not just how many students respond, but which students. If certain demographic groups are systematically less likely to complete evaluations, the resulting data may misrepresent the student experience.

De Bruin, Owen and Wu's study is one of the first to use a randomised controlled experiment to test whether practical changes to the evaluation process can reduce this non-response bias. Conducted at a selective liberal arts college in the United States, the experiment assigned students to one of three conditions: a traditional end-of-semester evaluation, an alternative prompt asking students to articulate their own criteria for effective teaching, and a delayed solicitation sent at the start of the following semester.

Key findings

Certain student groups are consistently under-represented. Across all study conditions, Pell Grant recipients (a proxy for lower-income students), students with low GPAs, and those later in their college careers were significantly less likely to complete teaching evaluations. They were also less likely to write more than three words in response to qualitative questions. This means the voices of some of the students who may have the most to say about their educational experience are the least likely to appear in evaluation data.

Racial disparities are most pronounced in traditional evaluations. Black, Hispanic and multi-racial students were less likely than white students to complete evaluations solicited at the standard end-of-semester point. This finding is especially consequential for institutions using SET data to assess teaching quality: if minority students are disproportionately absent from the data, conclusions about teaching effectiveness are drawn from an incomplete and potentially skewed sample.

"Black, Hispanic, and multi-racial students are less likely than white students to complete traditional evaluations solicited at the end of the semester."

An alternative prompt increases qualitative engagement. When students were asked to define their own criteria for effective teaching — rather than respond to the standard institutional questionnaire — they were more likely to write more than three words and produced longer responses overall. This suggests that the way a question is framed matters: giving students agency over the evaluation criteria encourages richer, more substantive feedback.

Delayed solicitation creates a more racially representative sample — but at a cost. Sending evaluations at the start of the following semester reduced overall response rates, particularly among graduating students. However, this decrease was smaller for Black, Hispanic and multi-racial students than for white students, resulting in a more racially balanced sample. The authors describe this as a potential tradeoff between overall response rates and the representation of minority student voices.

Practical implications for UK higher education

While this study was conducted at a US institution, the underlying dynamics are highly relevant to the UK context. The National Student Survey (NSS), the Postgraduate Taught Experience Survey (PTES), the Postgraduate Research Experience Survey (PRES), the UK Engagement Survey (UKES) and institutional module evaluations all face the same challenge: response rates are falling, and there is growing recognition that the students who respond may not be representative of the student body as a whole.

Rethink evaluation prompts. The finding that an alternative, student-centred prompt increases qualitative engagement is directly actionable. Institutions designing module evaluation forms or free-text questions for the NSS could experiment with prompts that invite students to define what matters to them, rather than only responding to pre-set criteria. This may be particularly effective in eliciting richer commentary from groups that traditionally under-engage with evaluations.

Consider who is missing. Student Experience teams and Pro-Vice-Chancellors for Education should routinely audit evaluation response data by demographic group — not just by module or department. If certain cohorts are under-represented, the resulting data may be driving decisions that do not reflect the full range of student experience. This is especially important when evaluation data feeds into staff review or quality-enhancement processes.
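As a minimal sketch of what such an audit could look like, assuming evaluation records can be joined to student-record demographics (the field names and toy data here are hypothetical, not from the paper), per-group response rates can be computed in a few lines of Python:

```python
from collections import defaultdict

def response_rates_by_group(records, group_field):
    """Compute the evaluation completion rate for each demographic group.

    records: iterable of dicts with a (hypothetical) demographic key,
    e.g. 'income_band', and a boolean 'responded' flag.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [responded, invited]
    for r in records:
        counts[r[group_field]][1] += 1
        if r["responded"]:
            counts[r[group_field]][0] += 1
    return {g: responded / invited for g, (responded, invited) in counts.items()}

# Toy data only: lower-income students responding less often, echoing
# the Pell Grant finding in the paper.
records = [
    {"income_band": "lower", "responded": False},
    {"income_band": "lower", "responded": True},
    {"income_band": "lower", "responded": False},
    {"income_band": "higher", "responded": True},
    {"income_band": "higher", "responded": True},
    {"income_band": "higher", "responded": False},
]
rates = response_rates_by_group(records, "income_band")
```

In practice the same grouping would be run per module or department as well, so that a healthy overall response rate cannot mask a gap within a specific cohort.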

Weigh the response-rate-versus-representativeness tradeoff. The finding that delayed solicitation reduces response rates but improves demographic balance presents a genuine policy tension. Institutions may need to decide whether a slightly lower overall response rate is an acceptable price for a more inclusive dataset — or explore hybrid approaches that combine end-of-semester collection with targeted follow-up for under-represented groups.

FAQ

Q: How can universities apply these findings when they cannot easily replicate a randomised experiment?

A: Institutions do not need to run a formal experiment to act on these findings. A practical first step is to analyse existing evaluation data by student demographics — ethnicity, socioeconomic background, year of study — to identify which groups are under-represented. Many student records systems already hold this information. Once gaps are identified, universities can trial alternative prompts or follow-up reminders targeted at low-responding groups and compare the results across cycles. Even small-scale pilots within individual faculties can generate useful evidence about what works locally.
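For comparing results across cycles or pilot conditions, a standard two-proportion z-test is one simple way to check whether a change in a group's response rate is larger than chance alone would explain. A minimal sketch, with purely illustrative numbers:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic for comparing response rates between
    two evaluation cycles (or two prompt variants)."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative figures: an under-represented group's responses rose from
# 40/200 to 65/200 after a prompt change; |z| > 1.96 suggests the shift
# is unlikely to be noise at the conventional 5% level.
z = two_proportion_z(65, 200, 40, 200)
```

Small faculty-level pilots will often lack the sample size for a decisive result, so trends across several cycles matter more than any single comparison.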

Q: Does this research suggest that student evaluations are fundamentally unreliable?

A: Not unreliable — but incomplete. The paper does not argue against using student evaluations; it argues for making them better. The core message is that institutions should treat evaluation data with the same methodological rigour they would apply to any survey instrument: checking for non-response bias, considering whose voices are absent, and triangulating with other data sources such as free-text comment analysis. When the qualitative data is analysed at scale, it can reveal themes and concerns that Likert-scale averages alone cannot capture.

Q: How does this connect to broader efforts around equality, diversity and inclusion in UK higher education?

A: If evaluation systems systematically under-represent the views of students from minority ethnic backgrounds or lower socioeconomic groups, then decisions made on the basis of that data risk perpetuating inequities. The Office for Students (OfS) and institutions themselves have placed increasing emphasis on reducing awarding gaps and improving outcomes for under-represented students. Ensuring that student voice mechanisms genuinely capture diverse perspectives is a necessary part of that effort — not just an administrative detail, but a matter of institutional equity.

References

[Paper Source]: De Bruin, E., Owen, A. L. and Wu, S., 'Can student evaluations be made more representative? Testing alternative strategies', Studies in Higher Education. DOI: 10.1080/03075079.2025.2467424

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
