Why assessment fairness does not feel the same to every student

Updated Apr 06, 2026

At Student Voice AI, we often see assessment fairness surface in open comments before it shifts a headline score. A new paper in Studies in Higher Education by K. Skylar Powell, Maria Kambouri and Panagiotis Rentzelas matters because it shows that students do not judge fairness through one shared lens. For UK universities that rely on module evaluations, assessment surveys, and free-text comments, that is a useful warning: a single fairness score can conceal several very different problems.

Context and research question

Assessment fairness sits close to student trust. When marking feels inconsistent, opaque, or overly comparative, students rarely describe that as a technical psychometric issue. They describe it as unfairness. For institutions, the challenge is that a fairness score or a short complaint can bundle together several judgements: whether the rules were clear, whether the outcome felt deserved, whether peers were treated consistently, and whether the system aligned with a student's expectations.

Powell, Kambouri and Rentzelas examine that problem through two psychological variables: self-construal, meaning whether students understand themselves more independently or more relationally, and self-esteem. Using a learning task based on the Wisconsin Card Sorting Test, they studied 214 undergraduates from the United States and South Korea. Students were asked about the fairness of absolute and relative assessment procedures, then about manipulated assessment outcomes. That design helps explain not just whether students say something is fair, but why similar outcomes can land differently.

Key findings

The first takeaway is that assessment fairness is not only about the formal procedure. As the authors note, learner perceptions of fairness can shape motivation and learning. For institutional feedback teams, that means fairness comments can be early warning signs of disengagement, not just expressions of frustration.

The study did not find a simple relationship between self-construal alone and how students judged assessment procedures in the abstract. Universities should be cautious about assuming that students naturally line up behind one assessment model or another. Fairness is not a fixed attitude towards absolute or relative grading.

What mattered more was the interaction between self-construal and self-esteem when students judged specific outcomes. For students with a more interdependent orientation, higher self-esteem was associated with judging lower assessment outcomes as fairer. For students with a more independent orientation, the pattern moved in the opposite direction. That distinction matters because dissatisfaction with a mark and dissatisfaction with a process can overlap without being the same judgement.

"perceptions of fairness may differ depending upon learner self-identities."

For UK higher education teams, the practical message is clear: an aggregate fairness score can hide different causes. One student may be reacting to unclear criteria, another to comparative ranking, and another to how the result fits with their academic identity. This is exactly why student voice in assessment and feedback matters. Comments reveal which part of fairness is actually breaking down.

The paper also reinforces a broader lesson for student voice work: fairness is interpretive, not purely administrative. Institutions can moderate, standardise, and explain an assessment carefully, yet students may still judge the experience differently. That does not make the feedback trivial. It means universities need richer evidence than a single scale item if they want to understand where mistrust starts.

Practical implications

First, universities should separate procedural fairness from outcome fairness in surveys and module evaluations. Ask one question about whether criteria and marking processes were clear and consistently applied. Then use a separate open-text prompt to ask what, specifically, felt fair or unfair about the assessment. That simple split gives teams cleaner evidence and reduces the risk of collapsing several judgements into one metric.
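
To make that split concrete, here is a minimal sketch in Python of how the two items could be represented in a survey configuration. The wording, field names, and structure are illustrative assumptions, not a validated instrument or the format of any particular survey platform.

```python
# Hypothetical survey items illustrating the procedural/outcome split.
# Item wording and field names are assumptions, not a validated scale.
fairness_items = [
    {
        "id": "procedural_fairness",
        "type": "likert",
        "text": "The assessment criteria and marking process were "
                "clear and consistently applied.",
    },
    {
        "id": "fairness_open",
        "type": "open_text",
        "text": "What, specifically, felt fair or unfair about this assessment?",
    },
]

for item in fairness_items:
    print(f"{item['id']}: {item['text']}")
```

Keeping the two items separate in the data model, not just in the questionnaire, is what lets teams report process quality and student interpretation as distinct signals later on.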

Second, teams should analyse fairness comments thematically rather than treating them as generic dissatisfaction. Track criteria clarity, consistency, workload, comparative grading, moderation, and feedback usefulness separately. That makes action more precise. Teams can see whether they need to redesign the assessment, explain it better, or improve marking practice. For Student Voice Analytics, this is a strong use case because fairness-related comments can be categorised and benchmarked at scale.
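
As a rough illustration of what theme tracking involves, the sketch below tags comments against a handful of fairness themes using simple keyword matching. The theme names, keywords, and comments are hypothetical, and this toy tagger is not how Student Voice Analytics categorises comments; a production taxonomy would be far richer and tuned for HE language.

```python
from collections import Counter

# Illustrative theme keywords; real taxonomies are broader and validated.
THEMES = {
    "criteria_clarity": ["criteria", "rubric", "unclear", "expectations"],
    "consistency": ["inconsistent", "different markers", "moderation"],
    "comparative_grading": ["curve", "ranked", "compared", "scaled"],
    "feedback_usefulness": ["feedback", "comments", "no explanation"],
}

def tag_comment(text: str) -> set[str]:
    """Return the set of fairness themes a comment touches."""
    lowered = text.lower()
    return {theme for theme, words in THEMES.items()
            if any(w in lowered for w in words)}

comments = [
    "The rubric was unclear and the criteria changed late.",
    "Marks felt scaled against the rest of the cohort.",
]

counts = Counter(t for c in comments for t in tag_comment(c))
print(counts)  # Counter({'criteria_clarity': 1, 'comparative_grading': 1})
```

Even this crude version shows the payoff: the two comments above express different fairness problems, and counting them separately points to different actions.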

Third, institutions should be careful when interpreting subgroup differences in assessment feedback. International cohorts, widening participation groups, and students with different educational histories may not share the same assumptions about what fair assessment looks like. If fairness scores or comments diverge sharply between groups, the right response is investigation, not dismissal. That is where segmented comment analysis becomes more useful than a single institutional average.
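
Building on the tagging sketch above, the following example compares theme rates between two cohorts and flags large gaps for follow-up. The cohort labels, records, and the 50-percentage-point threshold are all illustrative assumptions; real analysis would use larger samples and test divergences before acting.

```python
# Hypothetical records: each comment already theme-tagged (see sketch
# above) and labelled with a cohort. Fields and threshold are illustrative.
records = [
    {"cohort": "home", "themes": {"criteria_clarity"}},
    {"cohort": "international", "themes": {"comparative_grading"}},
    {"cohort": "international",
     "themes": {"comparative_grading", "feedback_usefulness"}},
]

def theme_rates(records, cohort):
    """Share of a cohort's comments that touch each theme."""
    subset = [r for r in records if r["cohort"] == cohort]
    counts = {}
    for r in subset:
        for t in r["themes"]:
            counts[t] = counts.get(t, 0) + 1
    return {t: n / len(subset) for t, n in counts.items()}

home = theme_rates(records, "home")
intl = theme_rates(records, "international")

# Flag themes whose rate diverges sharply between groups: a prompt to
# investigate and read the underlying comments, not a verdict.
for theme in set(home) | set(intl):
    gap = abs(home.get(theme, 0) - intl.get(theme, 0))
    if gap >= 0.5:
        print(f"investigate: {theme} (gap {gap:.0%})")
```

The design choice matters: the output is a shortlist for investigation, which keeps the analysis consistent with the paper's warning that subgroup differences reflect different interpretive frames, not necessarily a broken process.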

FAQ

Q: How should a university redesign its assessment fairness questions after reading this paper?

A: Split the issue into at least two parts. Ask one question about whether the assessment process was clear and consistently applied. Then use an open-text question to ask what made the assessment feel fair or unfair. That gives teams evidence on both process quality and student interpretation.

Q: What are the methodological limits of this study for real university assessment?

A: The study uses a controlled learning task with 214 undergraduates from two national contexts, the United States and South Korea, rather than live module assessment data. That makes it useful for identifying mechanisms, but not for estimating how often the same patterns appear in a specific institution. UK universities should use it to test their own survey and comment data, not treat it as a final answer.

Q: What does this change about how we use student voice on assessment and feedback?

A: It strengthens the case for combining scores with comments. A fairness rating tells you that something is wrong, but not whether the issue is criteria, communication, comparability, timing, or how students receive feedback once it appears. Student voice becomes more actionable when institutions can separate those causes and track them systematically.

References

Powell, K. S., Kambouri, M., & Rentzelas, P. "It’s (un)fair! Undergraduate student self-construals, self-esteem, and perceptions of summative assessment fairness." Studies in Higher Education. DOI: 10.1080/03075079.2026.2637821

