Updated Apr 09, 2026
If students only tell you what is unclear about assessment after a module ends, the chance to fix it for that cohort has usually passed. The QAA’s new assessment literacy toolkit matters because it gives universities a practical way to align expectations earlier and reduce the assessment and feedback concerns that keep surfacing in NSS and module evaluation comments. On 12 February 2026, the Quality Assurance Agency for Higher Education (QAA) announced the toolkit, the output of a QAA-funded Collaborative Enhancement Project led by Coventry University. [QAA announcement]
The QAA-funded project, titled Time and Effort on Task, has published a toolkit designed to help staff better understand how students experience time and effort in assessment, and to help students plan and complete assessments more effectively. The toolkit includes separate guides for students and for staff, and QAA describes it as an accessible, three-step guide. For institutions, that makes it easier to build a shared language around assessment expectations instead of assuming staff and students already mean the same thing.
The announcement also includes early signals about why this matters. Almost 40 per cent of students involved in the project were not familiar with the term “assessment literacy”, while over 90 per cent of staff said they wanted to understand students better and recognised the need for effective support. In the project’s evaluation of the toolkit, over 85 per cent of students and 84 per cent of staff reported that they could apply it in their learning and teaching. The gap is real, but the early results suggest it can be closed in practice.
QAA positions this as a practical response to an ongoing gap between staff and student expectations. As the project lead, Dr Christina Magkoufopoulou, puts it:
"What makes the toolkit unique is its focus on creating space for meaningful conversations about time and effort in assessment."
First, treat assessment literacy as a student voice action area in assessment design, not only a “study skills” topic. When students say assessment requirements are unclear, feedback is hard to use, or marking feels inconsistent, there is often an underlying expectations gap. A toolkit like this gives teams a structured way to make those expectations explicit and to check understanding early, before the same issues reappear in end-of-module feedback or NSS open text. The benefit is better guidance for students and clearer evidence for teams deciding what to fix.
Second, link improvement work to the feedback you already collect. If your student comments show repeated friction points, for example unclear briefs, rubric confusion, or workload clustering, use a simple in-term measure-intervene-re-measure loop: baseline the themes, roll out a small set of toolkit activities in priority modules, then check whether students’ language changes (a minimal sketch of that before-and-after check follows). This is especially important where assessment and feedback themes persist from one year to the next. The payoff is that teams can show whether an intervention improved the student experience, not just whether it was introduced.
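To make the re-measure step concrete, here is a minimal sketch of the before-and-after comparison. The counts, the theme label, and the helper function are all hypothetical; in practice the tagged counts would come from whatever comment analysis you already run.

```python
from math import sqrt

# Hypothetical counts: comments tagged "unclear brief" out of all
# assessment-related comments, before and after the pilot term.
baseline_hits, baseline_total = 62, 410
followup_hits, followup_total = 31, 395

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z-test: is the drop in theme rate plausibly real?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(baseline_hits, baseline_total, followup_hits, followup_total)
print(f"Theme rate {baseline_hits/baseline_total:.1%} -> {followup_hits/followup_total:.1%}, z = {z:.2f}")
# |z| > 1.96 suggests the shift is unlikely to be noise at the 5% level,
# though small cohorts and changed survey prompts can still confound it.
```

The point is not the statistics so much as the discipline: a baseline, a defined intervention window, and the same measure on both sides of it.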
Third, make it easy for staff to adopt. The most effective assessment literacy work is usually embedded in normal teaching, not bolted on. Consider packaging a small set of “minimum viable” steps for programme teams, for example a short briefing on expectations, an annotated exemplar, and a structured way for students to map time and effort to the marking criteria and standards they are aiming for. That lowers the barrier to action and makes improvement more consistent across modules.
At Student Voice AI, we see assessment and feedback themes as some of the highest-volume categories in open-text comments. Analysing comments at scale helps you separate issues that sound similar in meetings but behave differently in the data, for example unclear criteria, feedback quality, and workload (the sketch below shows the separation idea in its simplest form). That helps you decide where an assessment literacy intervention is likely to make the biggest difference, and where you need a different fix first.
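As a deliberately naive illustration of separating similar-sounding themes, the sketch below tags comments against a small keyword lexicon. The theme names, patterns, and example comments are invented, and keyword matching stands in for the classification a production system would actually use.

```python
import re
from collections import Counter

# Hypothetical theme lexicon; keyword matching is a stand-in for a
# trained classifier in any serious pipeline.
THEMES = {
    "unclear_criteria": re.compile(r"\b(rubric|criteria|brief|what (is|was) expected)\b", re.I),
    "feedback_quality": re.compile(r"\b(feedback|comments? (on|about) my work)\b", re.I),
    "workload": re.compile(r"\b(deadlines?|workload|same week)\b", re.I),
}

def tag_comment(text: str) -> list[str]:
    """Return every theme whose pattern appears in the comment."""
    return [name for name, pattern in THEMES.items() if pattern.search(text)]

comments = [
    "The rubric never explained what was expected for the report.",
    "Feedback on my work arrived after the next assignment was due.",
    "Three deadlines in the same week made the workload unmanageable.",
]
counts = Counter(theme for c in comments for theme in tag_comment(c))
print(counts)  # Counter({'unclear_criteria': 1, 'feedback_quality': 1, 'workload': 1})
```

Note that tag_comment is multi-label on purpose: one comment can raise both a criteria problem and a workload problem, which is exactly why themes that sound alike in meetings can behave differently in the data.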
If assessment and feedback keep resurfacing in your comments, see how Student Voice Analytics helps teams separate unclear criteria, feedback quality, and workload with one reproducible method. Then build your evidence base with our NSS open-text analysis methodology, the student comment analysis governance checklist, and our student feedback analysis glossary. For a research view on student voice in assessment and feedback, see The current understanding of student voice in assessment and feedback and staff-student partnerships to enhance assessment literacy.
Q: What should institutions do now?
A: Identify where assessment and feedback issues are most prominent in your student comments, then pilot a small set of assessment literacy activities in those modules. Keep it measurable, track themes before and after, and publish a short “you said, we did” update that closes the loop. That gives students a visible reason to trust the process and gives teams a clearer basis for the next round of improvement.
Q: When is the toolkit available, and who is it for?
A: QAA announced the toolkit on 12 February 2026. It is intended to support both students and staff, and it is positioned as a practical resource rather than a regulatory requirement.
Q: What is the broader implication for student voice?
A: Better assessment literacy can make student feedback more actionable. When students understand what good looks like and how marks are derived, their feedback tends to become more specific. That improves the quality of evidence for programme teams and makes quality processes easier to defend.
[QAA announcement]: Quality Assurance Agency for Higher Education, "QAA-funded CEP publishes toolkit for assessment literacy", published 12 February 2026.