QAA Assessment & Feedback Roadshow outcomes, and what they mean for student voice

Updated Apr 09, 2026

If students still leave assessment unsure what good work looks like, assessment reform has not landed. QAA's March 2026 Assessment & Feedback Roadshow summary matters because it pulls AI-resilient design, assessment literacy, and faster feedback into one clear student voice agenda. On 26 March 2026, the Quality Assurance Agency for Higher Education (QAA) published "Roadshow draws together diverse approaches to assessment", its summary of the March 2026 Roadshow events. For Student Experience teams, PVCs, and quality professionals, the practical takeaway is straightforward: institutions are being pushed to collect, interpret, and act on assessment feedback earlier, and with more precision.

What changed at the QAA Assessment & Feedback Roadshow

The summary is UK-wide in scope, but it is not a new regulatory requirement. QAA describes the Roadshow as four days of open events drawing examples from institutions across England, Scotland, and Wales, including Aston, Birmingham City, Coventry, Edinburgh, Exeter, Glasgow, King's College London, Leeds, LSE, Manchester, Plymouth, Southampton, and others. The useful signal is the shape of the agenda. QAA is treating assessment design, feedback practice, AI, student partnership, and inclusion as one connected enhancement problem rather than a set of separate workstreams. For institutions, that is a prompt to stop reviewing those issues in isolation.

The strongest through-line is visibility. Several of the examples QAA highlights are designed to make learning and judgement easier for students to understand and easier for institutions to evidence. LSE shared work on oral assessment as a more AI-resilient mode of assessment. Birmingham City presented a five-pillar framework for GenAI-integrated assessment. King's discussed a "processfolio" model that makes the writing process and use of resources explicit. Waltham International College described an assessment and feedback framework built around "three timed pulses, six live signals and nine small evidence objects", creating an authorship trail and earlier feedback cycles. The benefit is clearer evidence about what students were asked to do, how they did it, and where feedback can intervene sooner.

The framing QAA returns to throughout is that assessment should remain "visible, traceable and defensible".

The second major shift is how directly the summary ties student voice to assessment design and follow-through. Southampton's student intern model involves students in data analysis, focus groups, training, conferences, and report writing. Edinburgh and Glasgow shared approaches that give students bounded but real choices over rubrics, assessment topics, weightings, and peer review. UCL's staff-student work on assessment literacy focused on students' confidence in understanding tasks and feedback, while Buckinghamshire New University described live, same-day marking conversations to improve feedback use. Exeter's multi-stage calibration process is especially notable for student voice teams: colleagues collect student feedback in spring, agree priorities and changes over summer, then return with revised guides and clearer expectations in autumn. QAA says this has been associated with marked improvements in student satisfaction and attainment. For institutions, the point is not to collect more opinion. It is to use earlier feedback to improve assessment while students can still feel the difference.

What this means for institutions

First, institutions should stop treating assessment comments as one undifferentiated theme. The Roadshow examples draw a clearer distinction between problems of assessment design, feedback quality, marking confidence, AI boundaries, and assessment literacy, a split that mirrors common undergraduate student comment themes and categories. If your surveys or module evaluations bundle these together, you may know that students are unhappy without knowing what needs to change first. Clearer coding frameworks and more specific open-text prompts make it easier to pinpoint the real friction and act faster.
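To make the splitting concrete, a minimal sketch of a keyword-based coding pass is below. The theme labels and keyword lists are hypothetical illustrations, not QAA's taxonomy or any published coding framework; a real framework would use trained coders or a validated model rather than substring matching.

```python
# Illustrative only: route assessment-related comments into narrower themes
# using naive keyword rules. THEMES and its keywords are invented examples.
THEMES = {
    "assessment_design": ["brief", "rubric", "criteria"],
    "feedback_quality": ["generic feedback", "useful feedback"],
    "feedback_timeliness": ["late", "weeks to", "turnaround"],
    "ai_guidance": ["chatgpt", "generative ai"],
}

def code_comment(text: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    lowered = text.lower()
    matched = [theme for theme, keywords in THEMES.items()
               if any(kw in lowered for kw in keywords)]
    # Comments matching nothing still need review, so flag them explicitly.
    return matched or ["uncoded"]

comments = [
    "The brief was vague and the rubric didn't match the task.",
    "Feedback took six weeks to arrive.",
    "No one told us whether we could use ChatGPT.",
]
coded = {c: code_comment(c) for c in comments}
```

Even a toy pass like this makes the point in the paragraph above: one comment can carry several themes at once, and an "uncoded" bucket surfaces the comments a keyword scheme cannot place.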

Second, the summary strengthens the case for moving student feedback earlier in the cycle. Several of the practices QAA highlights do not wait for end-of-year survey results. They use focus groups, workshops, student interns, structured choice, and rapid feedback loops to test whether students understand the task, the criteria, and the purpose of assessment while there is still time to improve it. That connects closely to QAA's earlier assessment literacy toolkit update and the wider discussion of staff-student partnerships in assessment. The payoff is practical: teams can correct confusion in-year rather than documenting it after the damage is done.

Third, this raises the evidential bar for quality and enhancement teams. If universities redesign assessment for AI, simplify mitigating circumstances, or promise faster and more useful feedback, they need a way to show whether students experienced those changes as intended, especially because faster feedback policies do not guarantee better NSS results on their own. That means combining qualitative and quantitative evidence, documenting what changed, and checking whether later feedback shifted. If comments about unclear briefs, generic feedback, or AI confusion persist, the intervention may have changed the process on paper without changing the student experience. The benefit of a stronger evidence model is that teams can defend what changed, and what still needs work, with much more confidence.

How student feedback analysis connects

Assessment-related comments rarely describe one problem at a time. At Student Voice Analytics, we see the same response carrying unclear criteria, late or generic feedback, uncertainty about how AI may be used, concerns about fairness, and confusion about workload. Structured analysis helps separate those themes across NSS, PTES, PRES, and module evaluations so teams can see whether they are dealing with a feedback problem, an assessment literacy problem, or a trust problem. That is why resources such as our NSS open-text analysis methodology and student feedback analysis glossary matter in practice.

The QAA summary also shows why open-text analysis should not be saved for reporting season. If institutions are using focus groups, assessment pilots, or student partnership projects to redesign assessment, they need a consistent way to compare what students said before and after the change. That helps teams test whether interventions actually improved clarity, confidence, and feedback usefulness, which is the issue behind long-running questions about what students think good feedback looks like. Used well, that kind of analysis turns assessment reform into something teams can evaluate, not just announce.
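The before/after comparison described above can be sketched very simply: once comments are coded to themes, compare each theme's share of all comments across the two windows. The theme labels and counts below are invented for illustration.

```python
# Illustrative only: compare the share of comments per theme before and
# after an assessment change. Labels and counts are hypothetical.
from collections import Counter

def theme_shares(coded_comments: list[str]) -> dict[str, float]:
    """Fraction of comments carrying each theme label."""
    counts = Counter(coded_comments)
    total = sum(counts.values())
    return {theme: n / total for theme, n in counts.items()}

before = ["feedback_timeliness"] * 6 + ["assessment_design"] * 4
after = ["feedback_timeliness"] * 2 + ["assessment_design"] * 8

after_shares = theme_shares(after)
shift = {theme: after_shares.get(theme, 0.0) - share
         for theme, share in theme_shares(before).items()}
```

In this invented example the timeliness share falls from 0.6 to 0.2, which is the kind of movement a team would want to see, and be able to show, after promising faster feedback. A real comparison would also need comparable samples and consistent coding across both windows.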

FAQ

Q: What should institutions do now?

A: Review your current assessment-related feedback and split it into clearer categories: assessment design, assessment literacy, marking confidence, feedback usefulness, feedback timeliness, and AI guidance. Then identify one or two high-friction areas to test with earlier student input, rather than waiting for the next annual survey cycle. That gives teams a sharper starting point for improvement and a clearer story to share with students about what changed.

Q: When did this happen, and who is affected?

A: QAA published the Roadshow summary on 26 March 2026, following events held during the last full week of March 2026. The examples come from QAA member institutions across the UK, so the implications are sector-wide, but the summary does not create a new mandatory rule or formal regulatory requirement.

Q: What is the broader implication for student voice?

A: The broader implication is that student voice in assessment and feedback is moving further upstream. Instead of being used only to judge assessment after the fact, it is increasingly being used to shape assessment design, clarify expectations, and verify whether changes have actually improved students' experience of feedback and fairness. The advantage is earlier correction, before confusion hardens into an end-of-year pattern.

References

[Quality Assurance Agency for Higher Education (QAA)]: "Roadshow draws together diverse approaches to assessment" Published: 2026-03-26

[Quality Assurance Agency for Higher Education (QAA)]: "QAA-funded CEP publishes toolkit for assessment literacy" Published: 2026-02-12

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
