Why assessment fairness does not feel the same to every student

Updated Mar 12, 2026

At Student Voice AI, we often see assessment fairness appear in open comments long before it becomes visible in a headline score. A new paper in Studies in Higher Education by K. Skylar Powell, Maria Kambouri and Panagiotis Rentzelas is useful because it shows that students do not approach fairness with one shared frame of reference. For UK universities using module evaluations, assessment surveys, and free-text comments to review the student experience, that matters.

Context and research question

Assessment fairness sits at the centre of student trust. When students feel marking is inconsistent, opaque, or overly comparative, they rarely describe that as a technical psychometric issue. They describe it as unfairness. The challenge for institutions is that a fairness score or a short complaint may combine several different judgements: whether the rules were clear, whether the outcome felt deserved, whether peers were treated similarly, or whether the system itself aligned with a student's expectations.

Powell, Kambouri and Rentzelas examine that problem through two psychological variables: self-construal, meaning whether students understand themselves more independently or more relationally, and self-esteem. Using a learning task based on the Wisconsin Card Sorting Test, they studied 214 undergraduates from the United States and South Korea. Students were first asked to judge the fairness of absolute and relative assessment procedures in the abstract, then to judge experimentally manipulated assessment outcomes, to test whether personal orientation changes how fairness is perceived.

Key findings

The first important point is that assessment fairness is not only about the formal procedure. As the authors note, learner perceptions of fairness can shape motivation and learning. That makes fairness comments especially important in institutional feedback systems, because they are often early signals of disengagement rather than just expressions of frustration.

The study found that self-construal on its own did not straightforwardly predict how students judged assessment procedures in the abstract. In other words, universities should be cautious about assuming that students simply prefer one assessment model over another in a stable, universal way. Fairness is not just a fixed attitude towards absolute or relative grading.

What mattered more was the interaction between self-construal and self-esteem when students were asked to judge specific outcomes. For students with a more interdependent orientation, higher self-esteem was associated with judging lower assessment outcomes as fairer. For students with a more independent orientation, the pattern reversed: higher self-esteem went with judging lower outcomes as less fair. That is a useful reminder that dissatisfaction with a mark and dissatisfaction with a process are related, but not identical.

"perceptions of fairness may differ depending upon learner self-identities."

For UK higher education teams, the practical message is that an aggregate fairness score can hide very different underlying reasons. Two students might both say an assessment was unfair, while one is reacting to unclear criteria, another to comparative ranking, and a third to how the result sits with their own academic identity. This is exactly why open-text feedback matters. Comments explain which part of fairness is breaking down.

The paper also reinforces a broader lesson for student voice work: fairness is interpretive, not purely administrative. Institutions can moderate, standardise, and explain an assessment carefully, and still find that students judge the experience differently. That does not mean fairness is subjective in the trivial sense. It means universities need richer evidence than a single scale item if they want to understand where mistrust is coming from.

Practical implications

First, universities should separate procedural fairness from outcome fairness in their surveys and module evaluations. One item might ask whether criteria and marking processes were clear and consistently applied. A separate open-text prompt should ask what, specifically, felt fair or unfair about the assessment. That distinction reduces the risk of collapsing several judgements into one metric.
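
As a concrete illustration, the split might look like this in a survey definition. The item ids, types, and wording below are hypothetical, not a prescribed standard.

```python
# Hypothetical survey definition separating procedural and outcome fairness.
# Item ids, types, and wording are illustrative assumptions.
fairness_items = [
    {
        "id": "assessment_process_fair",
        "type": "likert_1_to_5",
        "text": "The marking criteria were clear and consistently applied.",
    },
    {
        "id": "assessment_fairness_open",
        "type": "free_text",
        "text": "What, specifically, felt fair or unfair about this assessment?",
    },
]
```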

Second, teams should analyse fairness comments thematically rather than treating them as generic dissatisfaction. Themes such as criteria clarity, consistency, workload, comparative grading, moderation, and feedback usefulness should be tracked separately. For Student Voice Analytics, this is a strong use case: fairness-related comments can be categorised and benchmarked at scale so institutions can see whether complaints are clustering around design, communication, or marking practice.
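
A deliberately simple sketch of what theme-level tracking involves appears below. The keyword lexicon is invented for illustration, and a production pipeline would normally use trained classifiers rather than keyword matching; the point is only that each fairness comment can carry separate theme tags rather than one generic dissatisfaction label.

```python
# Simplified sketch of theme-tagging fairness comments.
# The theme lexicon is invented for illustration only.
from collections import Counter

THEMES = {
    "criteria_clarity": ["criteria", "rubric", "unclear", "expectations"],
    "consistency": ["inconsistent", "different markers", "moderation"],
    "comparative_grading": ["curve", "ranked", "compared", "scaled"],
    "feedback_usefulness": ["feedback", "comments", "no explanation"],
}

def tag_themes(comment: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in text for w in words)]

comments = [
    "The rubric was unclear and expectations kept changing.",
    "We were graded on a curve, which felt unfair to everyone.",
]

counts = Counter(t for c in comments for t in tag_themes(c))
print(counts)  # Counter({'criteria_clarity': 1, 'comparative_grading': 1})
```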

Third, institutions should be careful when interpreting subgroup differences in assessment feedback. International cohorts, widening participation groups, and students with different educational histories may not share the same assumptions about what fair assessment looks like. If fairness scores or comments diverge sharply between groups, the right response is investigation, not dismissal.
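
To illustrate what a first-pass subgroup check might look like, here is a short sketch; the column names, cohort labels, and the 0.5-point flagging threshold are all assumptions chosen for illustration.

```python
# Hypothetical check for diverging fairness scores across cohorts.
# Column names, cohort labels, and threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "cohort": ["home", "home", "international", "international", "international"],
    "fairness_score": [4.2, 3.9, 3.1, 2.8, 3.0],
})

by_group = df.groupby("cohort")["fairness_score"].agg(["mean", "count"])
gap = by_group["mean"].max() - by_group["mean"].min()

# A large gap is a prompt to read the underlying comments, not a verdict.
if gap > 0.5:
    print(by_group)
    print(f"Gap of {gap:.2f} points: review comments by theme before concluding.")
```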

FAQ

Q: How should a university redesign its assessment fairness questions after reading this paper?

A: Split the issue into at least two parts. Ask one question about whether the assessment process was clear and consistently applied, and use an open-text question to ask what made the assessment feel fair or unfair. That gives teams evidence on both process and perception.

Q: What are the methodological limits of this study for real university assessment?

A: The study uses a controlled learning task with 214 undergraduates from two national contexts, the United States and South Korea, rather than live module assessment data. That makes it useful for identifying mechanisms, but not for estimating how often the same patterns appear in a specific institution. UK universities should treat it as a prompt to test their own survey and comment data, not as a final answer.

Q: What does this change about how we use student voice on assessment and feedback?

A: It strengthens the case for combining scores with comments. A fairness rating tells you that something is wrong, but not whether the issue is criteria, communication, comparability, timing, or the emotional impact of the result itself. Student voice becomes more actionable when institutions analyse those distinctions systematically.

References

[Paper Source]: Powell, K. S., Kambouri, M., & Rentzelas, P. "It’s (un)fair! Undergraduate student self-construals, self-esteem, and perceptions of summative assessment fairness." Studies in Higher Education. DOI: 10.1080/03075079.2026.2637821

