Updated Apr 29, 2026
Late assessment feedback is now showing up as an AI issue, not just a marking issue. On 23 March 2026, Wonkhe published Trained to stop learning: How students are experiencing assessment and learning in an age of AI, a UK-wide research release arguing that assessment design, feedback timing, and unclear AI rules are shaping how students use generative AI. For Student Experience teams, PVCs, and quality professionals working on student voice, that matters because the report turns AI use into a practical evidence question: are students using AI to deepen understanding, or to compensate for feedback and assessment systems that are not working well enough?
This is not a new regulation. It is a new UK evidence point. Wonkhe says its research combined focus groups in February and March 2026 with a survey of 1,055 students across 52 HE providers in the UK, weighted for gender and level of study. The headline claim is that assessment design matters more than AI policy alone. The article says nearly half of students worry their grades do not reflect what they actually know, 38% say they have submitted work they could not fully explain, and only 21% feel their course primarily rewards thinking and reasoning.
The part most relevant to student feedback practice is Finding 10. The report says feedback often arrives after students have already started the next assignment, which turns a nominally developmental process into a largely summative one. It also argues that where briefs and marking criteria are unclear, students use AI as a sensemaking tool rather than only as a shortcut.
"We usually get feedback from the first assessment after we've started the second"
The report's recommendations make the institutional direction clearer. Wonkhe argues that universities should build short verification moments into assessment, replace generic AI declaration forms with module-level guidance, and audit whether feedback arrives in time to inform later work. The scope is sector-wide rather than institution-specific, but the practical message is hard to miss: late feedback, unclear assessment expectations, and AI use should now be reviewed as one connected student experience problem.
First, institutions should stop handling AI guidance and feedback quality as separate workstreams. If feedback timing, criteria clarity, and AI use are linked in student behaviour, then module evaluations and pulse surveys need to capture those links directly. A question set that only asks whether students found feedback useful may miss whether it arrived soon enough to change the next task, or whether students turned to AI because the brief felt under-specified. That aligns closely with QAA's recent assessment and feedback roadshow findings, which also point to the need for earlier, more precise evidence on feedback.
Second, the report raises the bar for module-level evidence. Generic institutional AI principles are unlikely to help if different tutors on the same programme interpret them differently. Quality teams should check whether local surveys, rep systems, and annual monitoring can separate problems of feedback timing, criteria clarity, accessibility, workload, and AI permission. The benefit is simple: once those issues are separated, it is easier to assign action to assessment leads, module teams, digital education, or student support rather than treating everything as a vague AI concern.
Third, this is a fairness and support issue as well as an integrity issue. Wonkhe says disabled students are using AI for cognitive support that formal adjustments do not, in their experience, provide, and that women are more likely to report AI-related anxiety without using the tools themselves. For institutions, that means AI-related student voice cannot be coded as a single theme. Teams need to know which comments are really about support gaps, which are about inconsistent guidance, and which are about assessment design. That is where better evidence reduces the risk of a blunt response.
This is exactly the kind of issue where open-text analysis matters. A closed question can tell you that students dislike feedback timing or feel uncertain about AI. It cannot show whether the real problem is late return, thin comments, unclear briefs, contradictory tutor guidance, or a lack of accessible study support. A governed approach such as our NSS open-text analysis methodology helps institutions separate those themes and compare them across module evaluations, local pulses, NSS, PTES, and other routes.
Where universities are already collecting large volumes of assessment comments, Student Voice Analytics can help turn those mixed concerns into a clearer evidence trail for course teams and committees. The important point is not to add more AI rhetoric to reporting. It is to connect comments about feedback, workload, and guidance to a method that teams can defend, document, and revisit, which is why our student comment analysis governance checklist is relevant here.
Q: What should institutions do now?
A: Start with an audit of assessment-related feedback questions and action routes. Check whether your module evaluations or pulse surveys distinguish feedback timing, feedback usefulness, brief clarity, and AI guidance. Then review one or two high-volume comment sets before the next survey cycle to see whether students are using AI as a workaround for problems you can fix.
Q: What is the timeline and scope of this change?
A: Wonkhe published the article and full report on 23 March 2026. The underlying research used focus groups conducted in February and March 2026 and a UK survey of 1,055 students from 52 HE providers. It is sector research, not a regulatory change, but it speaks directly to assessment and quality practice across the UK.
Q: What is the broader implication for student voice?
A: The broader implication is that student voice on AI cannot be treated as a standalone technology topic. When students say they are using AI because feedback is late, criteria are unclear, or guidance is inconsistent, they are describing problems in assessment design and institutional follow-through. Universities that analyse those comments well will have a much stronger basis for redesigning assessment before the issue hardens into NSS or course-level patterns.
[Wonkhe]: "Trained to stop learning: How students are experiencing assessment and learning in an age of AI" Published: 2026-03-23
© Student Voice Systems Limited, All rights reserved.