Updated Apr 03, 2026
On 26 March 2026, the Quality Assurance Agency for Higher Education (QAA) published "Roadshow draws together diverse approaches to assessment", its summary of the March 2026 Assessment & Feedback Roadshow. For Student Experience teams, PVCs, and quality professionals, this matters because the summary brings together three issues that now recur across module evaluations, NSS comments, and local surveys: AI-resilient assessment design, assessment literacy, and faster, clearer feedback. At Student Voice AI, we see these themes appearing together often enough that this is more than an academic development update. It is a practical signal about how institutions are being expected to collect, interpret, and act on student voice.
The summary is UK-wide in scope, but it is not a new regulatory requirement. QAA describes the Roadshow as four days of open events drawing examples from institutions across England, Scotland, and Wales, including Aston, Birmingham City, Coventry, Edinburgh, Exeter, Glasgow, King's College London, Leeds, LSE, Manchester, Plymouth, Southampton, and others. What matters is the shape of the agenda. QAA is treating assessment design, feedback practice, AI, student partnership, and inclusion as one connected enhancement problem rather than a set of separate workstreams.
The strongest through-line is visibility. Several of the examples QAA highlights are designed to make learning and judgement easier for students to understand and easier for institutions to evidence. LSE shared work on oral assessment as a more AI-resilient mode of assessment. Birmingham City presented a five-pillar framework for GenAI-integrated assessment. King's discussed a "processfolio" model that makes the writing process and use of resources explicit. Waltham International College described an assessment and feedback framework built around "three timed pulses, six live signals and nine small evidence objects", creating an authorship trail and earlier feedback cycles.
The underlying principle across these examples is that assessment should remain "visible, traceable and defensible".
The second major shift is how directly the summary ties student voice to assessment design and follow-through. Southampton's student intern model involves students in data analysis, focus groups, training, conferences, and report-writing. Edinburgh and Glasgow shared approaches that give students bounded but real choices over rubrics, assessment topics, weightings, and peer review. UCL's staff-student work on assessment literacy focused on students' confidence in understanding tasks and feedback, while Buckinghamshire New University described live, same-day marking conversations to improve feedback use. Exeter's multi-stage calibration process is especially notable for student voice teams: colleagues collect student feedback in spring, agree priorities and changes over summer, then return with revised guides and clearer expectations in autumn. QAA says this has been associated with marked improvements in student satisfaction and attainment.
Three practical implications follow. First, institutions should stop treating assessment comments as one undifferentiated theme. The Roadshow examples draw a clearer distinction between problems of assessment design, feedback quality, marking confidence, AI boundaries, and assessment literacy. If your surveys or module evaluations bundle these together, you may know that students are unhappy without knowing what needs to change first. That is where clearer coding frameworks and more specific open-text prompts matter.
Second, the summary strengthens the case for moving student feedback earlier in the cycle. Several of the practices QAA highlights do not wait for end-of-year survey results. They use focus groups, workshops, student interns, structured choice, and rapid feedback loops to test whether students understand the task, the criteria, and the purpose of assessment while there is still time to improve it. That connects closely to QAA's earlier assessment literacy toolkit update and the wider discussion of staff-student partnerships in assessment.
Third, this raises the evidential bar for quality and enhancement teams. If universities redesign assessment for AI, simplify mitigating circumstances processes, or promise faster and more useful feedback, they need a way to show whether students experienced those changes as intended. That means combining qualitative and quantitative evidence, documenting what changed, and checking whether later feedback shifted. If comments about unclear briefs, generic feedback, or AI confusion persist, the intervention may have changed the process on paper without changing the student experience.
At Student Voice AI, we see assessment-related comments carrying several issues at once: unclear criteria, late or generic feedback, uncertainty about how AI may be used, concerns about fairness, and confusion about workload. Structured analysis helps separate those themes across NSS, PTES, PRES, and module evaluations so teams can see whether they are dealing with a feedback problem, an assessment literacy problem, or a trust problem. That is why resources such as our NSS open-text analysis methodology and student feedback analysis glossary matter in practice.
The QAA summary also shows why open-text analysis should not be saved for reporting season. If institutions are using focus groups, assessment pilots, or student partnership projects to redesign assessment, they need a consistent way to compare what students said before and after the change. That can help teams test whether interventions actually improved clarity, confidence, and feedback usefulness, which is the issue behind long-running questions about what students think good feedback looks like.
Q: What should institutions do now?
A: Review your current assessment-related feedback and split it into clearer categories: assessment design, assessment literacy, marking confidence, feedback usefulness, feedback timeliness, and AI guidance. Then identify one or two high-friction areas to test with earlier student input, rather than waiting for the next annual survey cycle.
Q: When did this happen, and who is affected?
A: QAA published the Roadshow summary on 26 March 2026, following events held during the last full week of March 2026. The examples come from QAA member institutions across the UK, so the implications are sector-wide, but the summary does not create a new mandatory rule or formal regulatory requirement.
Q: What is the broader implication for student voice?
A: The broader implication is that student voice is moving further upstream. Instead of being used only to judge assessment after the fact, it is increasingly being used to shape assessment design, clarify expectations, and verify whether changes have actually improved students' experience of feedback and fairness.
[Quality Assurance Agency for Higher Education (QAA)]: "Roadshow draws together diverse approaches to assessment" Published: 2026-03-26
[Quality Assurance Agency for Higher Education (QAA)]: "QAA-funded CEP publishes toolkit for assessment literacy" Published: 2026-02-12