Students and educators prioritise different things in digital assessment quality

Updated Mar 21, 2026

At Student Voice AI, we often see assessment complaints harden because universities ask students about fairness, feedback, or digital tools only after the design work is already finished. That is why Jodi Huber, Amanda White and Andrew Brodzeli's paper in Studies in Higher Education, "Walking the tightrope of quality assessment: balancing perspectives and priorities of stakeholder groups", is useful. Drawing on the perspectives of students, educators, employers, accrediting bodies, and institutional policy-makers, it shows that assessment quality is not one thing, and that student experience has to be treated as part of design rather than as a post-hoc satisfaction measure.

Context and research question

Digital assessment is now routine in higher education, and generative AI has made the design problem more complicated. Universities need assessments that preserve academic integrity, generate useful feedback, feel fair to students, scale operationally, and still prepare graduates for professional life. Those goals do not always pull in the same direction.

Huber and colleagues ask how five stakeholder groups define quality digital assessment, where their priorities converge, and where they differ. The study used a mixed-methods design in Australian business education: 46 participants across interviews and focus groups, including 15 students, then a national survey with 201 respondents. The context is not UK-based, but the underlying problem is familiar to any university reviewing digital assessment, AI guidance, or student feedback on fairness and value.

Key findings

All five stakeholder groups broadly supported the main dimensions of assessment quality, but the data also exposed two gaps in the framework the study started from: purpose and technology. The paper argues that purpose needs to be explicit because stakeholders kept returning to the question of what an assessment is actually for, not just whether it is authentic or secure. Technology, meanwhile, was more than a delivery mechanism: it shaped access, implementation, integrity, and the student experience itself.

"The findings highlight the importance of balancing academic integrity, feedback quality, student experience, and authenticity in assessment design"

Students and educators did not weight the same elements equally. The clearest statistical difference for formative assessment was that students rated quality feedback more highly than educators did. For summative assessment, educators rated purpose and academic integrity more highly than students did, while students rated student experience more highly than employers did. That matters because assessment redesign can easily drift towards compliance and risk management unless student experience is protected deliberately.

Authenticity mattered to everyone, but not in the same way. Educators tended to treat authentic assessment as a pedagogic strategy that can deepen learning and reduce misconduct risk. Students understood authenticity more pragmatically, as preparation for work and a way to demonstrate useful capabilities. Employers supported authentic assessment too, but some found the term itself ambiguous and preferred language about relevance and transferability. In other words, people may endorse the same principle while meaning different things by it.

Contextual constraints were not secondary. Available resourcing was the highest-rated contextual factor across stakeholder groups, and the study found statistically significant differences around institutional policies and assessment scale. The paper also surfaces a practical digital equity point that UK institutions should recognise immediately: universities cannot assume every student has the right technology, even as assessment becomes more digital and more entangled with AI-related expectations.

The broader conclusion is that assessment quality improves when universities create structured dialogue across groups rather than relying on staff-only design. The refined framework adds purpose and technology, but the more important message is procedural. Co-design and shared accountability help expose blind spots that single-group design can miss.

Practical implications

For UK higher education, the first implication is to treat student experience as a design criterion, not just as a survey outcome. When students later describe an assessment as unfair, unclear, low-value, or stressful, that often reflects trade-offs embedded in the original design. Assessment review groups should therefore examine purpose, feedback, integrity, authenticity, technology, and resourcing together, rather than reviewing them in separate silos.

The second implication is methodological. Universities should add targeted open-text prompts when reviewing assessment and feedback. If students rate an assessment poorly, ask what reduced quality: unclear purpose, weak feedback, technology friction, workload, authenticity, or integrity rules. This is where Student Voice Analytics fits naturally. Structured comment analysis can separate very different problems that headline scores flatten together.

Third, institutions should use multi-stakeholder review before major digital or AI policy changes. This paper suggests that one group's "good assessment" can be another group's burden or blind spot. In UK practice, programme teams, student representatives, quality staff, learning technologists, and where relevant professional bodies should review assessment changes together and be explicit about the trade-offs they are making.

FAQ

Q: How should a university use this framework when redesigning digital assessment?

A: Start with purpose. Decide what the task should evidence, then test whether feedback, student experience, academic integrity, authenticity, technology, and resourcing all support that purpose. Run a short student consultation before launch, then compare survey results with open-text comments after delivery so you can see which part of "quality" is actually failing.

Q: What should institutions be cautious about when applying this study to UK higher education?

A: The paper is grounded in Australian business education, so it is not a direct blueprint for every UK discipline. But the mixed-methods design and the pattern of stakeholder differences are highly transferable. UK institutions should test the framework locally, especially in disciplines with professional accreditation, high-volume assessment, or rapidly changing AI guidance.

Q: What does this change about how universities use student voice on assessment?

A: It shifts student voice from end-point reaction to design evidence. Instead of asking only whether students liked an assessment, universities can use surveys and open comments to test which dimensions of quality are holding up, where stakeholder priorities diverge, and what needs changing before dissatisfaction hardens into NSS or module-evaluation criticism.

References

[Paper Source]: Jodi Huber, Amanda White and Andrew Brodzeli, "Walking the tightrope of quality assessment: balancing perspectives and priorities of stakeholder groups", Studies in Higher Education. DOI: 10.1080/03075079.2026.2619909

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround

