Students judge feedback comments as fairer when they are usable

Updated Mar 24, 2026

At Student Voice AI, we spend a lot of time helping universities interpret complaints about feedback that sound vague on the surface: unfair, generic, harsh, unhelpful. A recent paper in Assessment & Evaluation in Higher Education by David Playfoot, Ruth Horry and Aimee E. Pink is useful because it isolates one part of that problem. In "Fairly useful feedback: characteristics of feedback comments perceived as fair by students", the authors ask what kinds of written feedback comments students actually read as fair. For UK universities using module evaluations, NSS-style feedback questions, and open-text comments to review assessment practice, that is a highly practical question.

Context and research question

Universities often respond to feedback concerns by trying to improve tone, speed, or consistency. All of those matter, but they do not answer a more basic question: what does fairness in feedback look like from the student perspective? Students frequently describe feedback as unfair, yet that judgement can reflect several things at once, including the grade awarded, the wording of the comment, whether next steps are clear, and whether the student feels respected.

Playfoot, Horry and Pink focus specifically on written comments. Across two experiments at a UK university, they presented second-year psychology students with fictional assignment extracts and attached feedback comments that varied in two ways: how usable they were, meaning constructive and actionable, and how nice they were, meaning supportive in tone. Experiment 1 involved 127 students and tested whether fairness changed when a grade was present or absent. Experiment 2 involved 151 students and tested whether fairness changed when the attached grade was more generous or less generous than the standard of the work appeared to deserve.

For UK higher education teams, that makes the paper unusually useful. It does not only ask whether students like positive wording. It asks whether the phrasing of feedback comments changes how fair the whole experience feels, even when the underlying work is held constant.

Key findings

The strongest finding is that usability mattered much more than niceness. In Experiment 1, comments rated as more usable were judged significantly fairer than comments that were less usable, with a large effect size (F(1,140) = 44.94, p < 0.001, ηp² = 0.24). Niceness also had a statistically significant effect, but it was much smaller (F(1,140) = 4.08, p = 0.045, ηp² = 0.03). In other words, students noticed supportive wording, but they responded much more strongly to whether the comment helped them improve.
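
For readers who want to sanity-check those numbers, partial eta squared in a design like this can be recovered directly from the F statistic and its degrees of freedom, and the arithmetic below reproduces the reported effect sizes:

  ηp² = (F × df_effect) / (F × df_effect + df_error)

  Usability, Experiment 1: (44.94 × 1) / (44.94 × 1 + 140) ≈ 0.24
  Niceness, Experiment 1:  (4.08 × 1) / (4.08 × 1 + 140) ≈ 0.03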

"Comments that were usable were considered fairer."

That pattern held up when the grade context became more demanding. In Experiment 2, the authors manipulated whether the attached mark was a full classification higher or lower than the apparent standard of the work. Even then, usability still had a strong effect on fairness perceptions (F(1,143) = 33.92, p < 0.001, ηp² = 0.19), while niceness had no significant effect. This matters because it suggests students are not judging fairness purely through interpersonal warmth. They are asking whether the comment gives them a credible route forward.

The grade itself mattered less than many institutions assume. In Experiment 1, simply adding or removing a grade did not significantly change fairness ratings. In Experiment 2, grade generosity also produced no significant main effects or interactions. That does not mean grades are irrelevant to the student experience, but it does suggest that when students judge the fairness of feedback comments, the content of those comments can outweigh the surrounding mark to a greater extent than staff tend to expect.

There is also a useful nuance in the paper's discussion. Being nice does not automatically make feedback feel fair if it does not help the student do better next time. Students appear to treat fairness as connected to purpose. If feedback exists to support learning, then comments that are vague, padded, or emotionally gentle but lacking direction can still feel unfair because they fail to do the job feedback is supposed to do.

For Student Experience and Market Insights teams, that is important because it aligns with what often appears in survey comments. Students do not always ask for softer wording. More often, they ask for feedback that is specific, justified, and usable. The paper gives that pattern a stronger evidential base.

Practical implications

The first implication is that universities should review feedback quality through the lens of actionability, not only tone. Marker development should prioritise comments that explain what was done well, where the work fell short, and what the student could do differently next time. Supportive tone still matters, but it is not a substitute for usable information.

Second, institutions should be cautious about interpreting fairness complaints too narrowly as disputes about marks. This paper suggests that students may describe feedback as unfair even when the issue is not the grade itself, but the lack of clear, constructive guidance around it. Module evaluations and pulse surveys should therefore separate questions about the mark awarded from questions about whether comments were specific, respectful, and useful.

Third, this is exactly where open-text analysis becomes valuable. If students repeatedly describe feedback as vague, generic, contradictory, or impossible to act on, universities need a way to surface that pattern consistently across modules and cohorts. Student Voice Analytics can help teams categorise feedback-related comments at scale, distinguish tone problems from usability problems, and track whether interventions are improving the student experience in the places where fairness concerns are clustering.
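
As a concrete illustration of what distinguishing tone problems from usability problems can look like, here is a deliberately minimal Python sketch. The category names and keyword lists are illustrative assumptions, not the taxonomy Student Voice Analytics uses, and a production system would rely on validated coding rather than keyword matching.

    # Minimal sketch: flag whether an open-text comment raises a tone
    # problem, a usability problem, or both. Keyword lists here are
    # illustrative assumptions, not a validated HE taxonomy.
    TONE_TERMS = {"harsh", "rude", "dismissive", "condescending"}
    USABILITY_TERMS = {"vague", "generic", "contradictory",
                       "no guidance", "how to improve"}

    def categorise(comment: str) -> set[str]:
        """Return the issue categories a single comment appears to raise."""
        text = comment.lower()
        found = set()
        if any(term in text for term in TONE_TERMS):
            found.add("tone")
        if any(term in text for term in USABILITY_TERMS):
            found.add("usability")
        return found

    comments = [
        "The feedback was vague and gave no guidance on how to improve.",
        "Comments felt harsh and dismissive rather than constructive.",
    ]
    for c in comments:
        print(sorted(categorise(c)), "-", c)

Applied across modules and cohorts, the same idea is what lets a team see whether usability complaints, rather than tone complaints, are the ones clustering in particular courses over time.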

FAQ

Q: How can universities make written feedback feel fairer without simply telling staff to be more positive?

A: The most defensible change is to make comments more usable. Feedback should identify the issue clearly, explain why it matters, and point to a concrete next step. Staff can still write respectfully and supportively, but the evidence from this paper suggests that fairness perceptions rise most when students can see how to improve.

Q: What are the main methodological limits of this study?

A: The study used fictional assignment extracts and students from one UK psychology programme, so it does not replicate the full emotional stakes of receiving feedback on one's own assessed work. It is nevertheless useful because the experimental design isolates the effect of comment phrasing more cleanly than most survey-based studies can.

Q: What does this mean for student voice work more broadly?

A: It suggests that fairness in student feedback data should be unpacked, not treated as a single sentiment label. When students say feedback is unfair, institutions need to know whether they mean the mark, the criteria, the consistency of judgement, or the usability of the comments. Open-text analysis is essential for making that distinction visible.

References

[Paper Source]: Playfoot, D., Horry, R., & Pink, A. E. "Fairly useful feedback: characteristics of feedback comments perceived as fair by students". Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2025.2586836

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
