Students and educators prioritise different things in digital assessment quality

Updated Apr 06, 2026

At Student Voice AI, we often see assessment complaints harden because universities ask students about fairness, feedback, or digital tools only after the design work is finished. By that point, the main trade-offs are already locked in. That is why Jodi Huber, Amanda White and Andrew Brodzeli's paper in Studies in Higher Education, "Walking the tightrope of quality assessment: balancing perspectives and priorities of stakeholder groups", matters. Drawing on students, educators, employers, accrediting bodies, and institutional policy-makers, it shows that assessment quality is not one thing. Student experience has to be built into assessment design, not treated as a post-hoc satisfaction measure.

Context and research question

Digital assessment is now routine in higher education, and Generative AI has made the design problem more complicated, especially where institutions are already weighing AI equity challenges and opportunities in higher education. Universities need assessments that preserve academic integrity, generate useful feedback, feel fair to students, scale operationally, and still prepare graduates for professional life. Those goals do not always pull in the same direction, which makes a clear definition of quality essential.

Huber and colleagues ask how five stakeholder groups define quality digital assessment, where their priorities converge, and where they differ. The study used a mixed-methods design in Australian business education: interviews and focus groups with 46 participants, including 15 students, followed by a national survey with 201 respondents. The setting is not UK-based, but the problem is familiar to any university reviewing digital assessment, AI guidance, or student feedback on fairness and value.

Key findings

All five stakeholder groups recognised the core dimensions of assessment quality, but they also identified two missing elements: purpose and technology. The paper argues that purpose needs to be explicit because stakeholders kept returning to the question of what an assessment is actually for, not just whether it is authentic or secure. Technology was also more than a delivery mechanism. It shaped access, implementation, integrity, and the student experience itself. For UK teams, that is a reminder that assessment quality cannot be separated from the systems that deliver it.

"The findings highlight the importance of balancing academic integrity, feedback quality, student experience, and authenticity in assessment design"

Students and educators prioritised these elements differently. For formative assessment, students rated quality feedback more highly than educators, which fits the wider pattern described in the disconnect on what makes good feedback. For summative assessment, educators rated purpose and academic integrity more highly than students, while students rated student experience more highly than employers. That matters because assessment redesign can drift towards compliance and risk management unless student experience is protected deliberately.

Authenticity mattered to everyone, but not in the same way. Educators tended to treat authentic assessment as a pedagogic strategy that can deepen learning and reduce misconduct risk. Students understood authenticity more pragmatically, as preparation for work and a way to demonstrate useful capabilities. Employers supported authentic assessment too, but some found the term ambiguous and preferred language about relevance and transferability. The shared label hides different expectations, so institutions need to define authenticity clearly before assuming agreement.

Contextual constraints were not secondary. Available resourcing was the highest-rated contextual factor across stakeholder groups, and the study found statistically significant differences around institutional policies and assessment scale. The paper also surfaces a practical digital equity point that UK institutions should recognise immediately: universities cannot assume every student has the right technology, even as assessment becomes more digital and more entangled with AI-related expectations. In practice, quality depends as much on feasibility and access as on design intent.

The broader conclusion is that assessment quality improves when universities create structured dialogue across groups rather than relying on staff-only design. The refined framework adds purpose and technology, but the more important message is procedural. Co-design and shared accountability help expose blind spots that single-group design can miss. The benefit is not just better consultation. It is fewer avoidable assessment problems reaching students in the first place.

Practical implications

For UK higher education, the first implication is to treat student experience as a design criterion rather than only a survey outcome, an approach that sits at the heart of student voice in assessment and feedback. When students later describe an assessment as unfair, unclear, low-value, or stressful, that often reflects trade-offs embedded in the original design. Assessment review groups should therefore examine purpose, feedback, integrity, authenticity, technology, and resourcing together, rather than in separate silos. That gives teams a better chance of fixing design weaknesses before dissatisfaction shows up in module evaluations or the NSS.

The second implication is methodological. Universities should add targeted open-text prompts when reviewing assessment and feedback. If students rate an assessment poorly, ask what reduced quality: unclear purpose, weak feedback, technology friction, workload, authenticity, or integrity rules. This is where Student Voice Analytics fits naturally. Structured comment analysis can separate very different problems that headline scores flatten together, so teams know what to change first.
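
As a concrete illustration, the minimal Python sketch below tags open-text comments against the quality dimensions discussed above using simple keyword matching. The dimension names, keyword lists, and sample comments are illustrative assumptions only, not the taxonomy or method used by Student Voice Analytics, and any real deployment would need discipline-specific tuning and far more robust text analysis.

```python
# Minimal sketch: tag open-text assessment comments against illustrative
# quality dimensions via keyword matching. Keyword lists are assumptions,
# not a validated taxonomy.
from collections import Counter

DIMENSION_KEYWORDS = {
    "purpose": ["point of", "why we", "relevance", "pointless"],
    "feedback": ["feedback", "comments", "marking", "rubric"],
    "technology": ["platform", "crashed", "login", "upload", "wifi"],
    "workload": ["workload", "too much", "deadline", "time"],
    "authenticity": ["real world", "job", "workplace", "practical"],
    "integrity": ["cheating", "plagiarism", "proctoring", "collusion"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every dimension whose keywords appear in the comment."""
    text = comment.lower()
    return [dim for dim, words in DIMENSION_KEYWORDS.items()
            if any(w in text for w in words)]

def dimension_counts(comments: list[str]) -> Counter:
    """Count how often each dimension is raised across a set of comments."""
    counts = Counter()
    for comment in comments:
        counts.update(tag_comment(comment))
    return counts

if __name__ == "__main__":
    sample = [
        "The platform crashed twice during upload and I lost marks.",
        "I still don't see the point of this task for a real job.",
        "Feedback arrived after the next assignment was due.",
    ]
    print(dimension_counts(sample).most_common())
```

Even a crude pass like this separates technology friction from feedback complaints, which is exactly the distinction that headline scores tend to flatten.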

Third, institutions should use multi-stakeholder review before major digital or AI policy changes. This paper suggests that one group's "good assessment" can be another group's burden or blind spot. In UK practice, programme teams, student representatives, quality staff, learning technologists, and where relevant professional bodies should review assessment changes together and be explicit about the trade-offs they are making. That reduces the risk of solving academic integrity problems in online assessment while creating a feedback, workload, or access problem elsewhere.

FAQ

Q: How should a university use this framework when redesigning digital assessment?

A: Start with purpose. Decide what the task should evidence, then test whether feedback, student experience, academic integrity, authenticity, technology, and resourcing all support that purpose. Run a short student consultation before launch, then compare survey results with open-text comments after delivery so you can see which part of "quality" is actually failing.
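
For teams that want to make that comparison concrete, the short pandas sketch below joins headline ratings with comment-coded dimensions per assessment. The column names and sample data are hypothetical; the dimension coding could come from manual analysis, the keyword sketch above, or a purpose-built tool.

```python
# Sketch: compare headline ratings with comment-derived quality dimensions.
# Column names and values are hypothetical, for illustration only.
import pandas as pd

# One row per student response: overall rating plus the quality dimension
# their open-text comment was coded against.
responses = pd.DataFrame({
    "assessment": ["A1", "A1", "A1", "A2", "A2", "A2"],
    "rating":     [2, 3, 2, 4, 4, 5],
    "dimension":  ["technology", "feedback", "technology",
                   "authenticity", "feedback", "purpose"],
})

# Mean rating per assessment shows where quality is low...
print(responses.groupby("assessment")["rating"].mean())

# ...while the dimension breakdown shows which part of quality is failing.
print(responses.groupby(["assessment", "dimension"]).size())
```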

Q: What should institutions be cautious about when applying this study to UK higher education?

A: The paper is grounded in Australian business education, so it is not a direct blueprint for every UK discipline. But the mixed-methods design and the pattern of stakeholder differences are highly transferable. UK institutions should test the framework locally, especially in disciplines with professional accreditation, high-volume assessment, or rapidly changing AI guidance.

Q: What does this change about how universities use student voice on assessment?

A: It shifts student voice from end-point reaction to design evidence. Instead of asking only whether students liked an assessment, universities can use surveys and open comments to identify which dimensions of quality are holding up, where stakeholder priorities diverge, and what needs changing before dissatisfaction hardens into NSS or module-evaluation criticism.

References

Huber, J., White, A. and Brodzeli, A. "Walking the tightrope of quality assessment: balancing perspectives and priorities of stakeholder groups", Studies in Higher Education. DOI: 10.1080/03075079.2026.2619909

