QAA assessment literacy toolkit: aligning expectations to improve student feedback on assessment

Published Feb 28, 2026 · Updated Feb 28, 2026

On 12 February 2026, the Quality Assurance Agency for Higher Education (QAA) announced a new assessment literacy toolkit created through a QAA-funded Collaborative Enhancement Project led by Coventry University. We are highlighting it because assessment and feedback is a recurring theme in NSS and module evaluation comments, and institutions need practical ways to turn that student voice into clearer expectations and better outcomes. [QAA announcement]

What has changed in the assessment literacy toolkit

The QAA-funded project, titled Time and Effort on Task, has published a toolkit designed to help staff better understand students’ time and effort in assessment, and to support students to plan and complete assessments more effectively. The toolkit includes separate guides for students and for staff, and QAA describes it as an accessible, three-step guide.

The announcement also includes early signals about why this matters. Almost 40 per cent of students in the project work were not familiar with the term “assessment literacy”, while over 90 per cent of staff indicated a desire both to develop their understanding of students and to provide more effective support. In the project’s evaluation of the toolkit, over 85 per cent of students and 84 per cent of staff reported that they could apply it in their learning and teaching.

QAA positions this as a practical response to an ongoing gap between staff and student expectations. As the project lead, Dr Christina Magkoufopoulou, puts it:

"What makes the toolkit unique is its focus on creating space for meaningful conversations about time and effort in assessment."

What this means for institutions collecting student feedback on assessment

First, treat assessment literacy as a student voice action area, not only a “study skills” topic. When students say assessment requirements are unclear, feedback is hard to use, or marking feels inconsistent, there is often an underlying expectations gap. A toolkit like this gives teams a structured way to make those expectations explicit and to check understanding early, before the same issues reappear in end-of-module feedback or NSS open text.

Second, link improvement work to the feedback you already collect. If your student comments show repeated friction points, for example unclear briefs, rubric confusion, or workload clustering, use a simple “measure, intervene, re-measure” loop: baseline the themes, roll out a small set of toolkit activities in priority modules, then check whether students’ language changes. This is especially important where assessment and feedback themes are persistent year-to-year.
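
As a concrete illustration of that loop, here is a minimal sketch in Python, assuming comments have already been tagged with themes. The theme names and sample comments are hypothetical, invented for the example rather than drawn from any specific tool:

```python
from collections import Counter

def theme_rates(tagged_comments):
    """Share of comments mentioning each theme.

    tagged_comments: list of (comment_text, set_of_themes) pairs.
    """
    counts = Counter(t for _, themes in tagged_comments for t in themes)
    total = len(tagged_comments)
    return {theme: n / total for theme, n in counts.items()}

# Hypothetical tagged comments from before and after a toolkit pilot.
before = [
    ("The brief didn't say what counts as analysis", {"unclear_criteria"}),
    ("No idea how marks map to the rubric", {"unclear_criteria"}),
    ("Feedback arrived too late to use", {"feedback_quality"}),
    ("Three deadlines in the same week", {"workload"}),
]
after = [
    ("Feedback was generic again", {"feedback_quality"}),
    ("Deadlines still cluster in week 10", {"workload"}),
    ("The annotated exemplar made the criteria clear", set()),
]

# Baseline the themes, then re-measure after the intervention.
baseline, follow_up = theme_rates(before), theme_rates(after)
for theme in sorted(set(baseline) | set(follow_up)):
    print(f"{theme}: {baseline.get(theme, 0):.0%} -> {follow_up.get(theme, 0):.0%}")
```

The point is not the arithmetic but the discipline: the same theme definitions are applied before and after the pilot, so a drop in, say, unclear-criteria mentions can be attributed to the intervention rather than to a change in how comments were counted.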

Third, make it easy for staff to adopt. The most effective assessment literacy work is usually embedded in normal teaching, not bolted on. Consider packaging a small set of “minimum viable” steps for programme teams, for example a short briefing on expectations, an annotated exemplar, and a structured way for students to map time and effort to the marking criteria.

How student feedback analysis connects

At Student Voice AI, we see assessment and feedback themes as some of the highest-volume categories in open-text comments. Analysing comments at scale helps you separate issues that sound similar in meetings but behave differently in the data, for example unclear criteria versus feedback quality versus workload. That, in turn, helps you decide where an assessment literacy intervention is likely to move the needle, and where you need a different fix.
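
As a toy illustration of that separation, here is a minimal sketch assuming one keyword rule per theme. The rules, theme names, and example comment are invented for the purpose; a production pipeline would use a trained classifier over a full taxonomy rather than keyword matching:

```python
import re

# Hypothetical keyword rules, one per theme; purely illustrative.
THEME_RULES = {
    "unclear_criteria": re.compile(r"\b(criteria|rubric|brief|unclear)", re.I),
    "feedback_quality": re.compile(r"\b(feedback|comments)", re.I),
    "workload": re.compile(r"\b(deadline|workload|too much)", re.I),
}

def tag(comment: str) -> set:
    """Return every theme whose rule matches; one comment can carry several."""
    return {theme for theme, rule in THEME_RULES.items() if rule.search(comment)}

# A single sentence that mixes two themes often conflated in meetings.
print(tag("The rubric was vague and the feedback didn't explain the mark"))
# Tags both 'unclear_criteria' and 'feedback_quality' as distinct issues.
```

Separating the tags in this way is what lets you see that one theme may respond to an assessment literacy intervention while another needs a different fix entirely.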

If you are building evidence from open text, start with a defensible workflow and stable language. Useful references are our NSS open-text analysis methodology, the student comment analysis governance checklist, and our student feedback analysis glossary. For a research view on student voice in assessment and feedback, see The current understanding of student voice in assessment and feedback and staff-student partnerships to enhance assessment literacy.

FAQ

Q: What should institutions do now?

A: Identify where assessment and feedback issues are most prominent in your student comments, then pilot a small set of assessment literacy activities in those modules. Keep it measurable, track themes before and after, and publish a short “you said, we did” update that closes the loop.

Q: When is the toolkit available, and who is it for?

A: QAA announced the toolkit on 12 February 2026. It is intended to support both students and staff, and it is positioned as a practical resource rather than a regulatory requirement.

Q: What is the broader implication for student voice?

A: Better assessment literacy can make student feedback more actionable. When students understand what good looks like and how marks are derived, their feedback tends to become more specific, which improves the quality of evidence for programme teams and quality processes.

References

[QAA announcement]: Quality Assurance Agency for Higher Education, "QAA-funded CEP publishes toolkit for assessment literacy". Published: 2026-02-12.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
