Updated Apr 28, 2026
AI feedback in higher education is becoming easier to deploy, but the harder question is whether students see it as worth acting on. On 31 March 2026, the University of Surrey published "AI could undermine meaningful learning unless feedback stays rooted in connection, researchers recommend", highlighting new research on how generative AI is reshaping feedback. For Student Experience teams, PVCs, and quality professionals, this matters because universities using AI in feedback now need better evidence on trust, care, and learning, not just faster turnaround, especially given recent findings that students use generative AI for feedback but trust teachers more.
The immediate change is not a new regulation. It is that a UK university has now issued a clear public warning about a live sector trend. Surrey says the underlying paper, published in Assessment & Evaluation in Higher Education, finds that while AI can generate responses at speed and scale, it cannot fully replicate the judgement, empathy, and context that make feedback effective. Instead, the authors argue for a "care-full" approach that treats feedback as an ongoing process of dialogue, reflection, and growth, rather than a one-way transfer of comments.
The paper itself, published online on 18 March 2026, goes further than a general caution about overuse. It sets out ten principles for feedback in the age of AI, including that feedback is a process, feedback is relational, learning should take priority over technological efficiency, and feedback processes should be designed in conversation with learners and educators. That matters because it reframes the debate. The question is not only whether AI can help produce feedback more quickly, but whether the overall feedback process still supports learning in the way universities intend.
"The key question isn't what AI can do, it's what it should do."
Surrey's summary adds two practical cautions that institutions should not miss. First, students tend to place greater trust in feedback from human educators. Second, AI may be useful as a low-pressure way to explore ideas, but over-reliance could reduce meaningful interaction and worsen inequalities if some students benefit more than others. This is sector evidence rather than sector policy, but it arrives at a point when many universities are actively reviewing AI-supported assessment, tutoring, and feedback workflows.
The first implication is that universities should stop treating AI feedback pilots as productivity projects only. The important question is not whether AI can draft comments quickly, but whether students understand the purpose, trust the output, and know when a human response still matters. That makes it squarely an issue of student voice in assessment and feedback, because feedback only improves learning if students engage with it and use it, not simply if it is delivered more efficiently.
The second implication is about use cases. Institutions should separate low-stakes and high-stakes uses much more carefully. AI may help with early explanation, practice, or idea generation. Feedback tied to standards, progression, or student confidence needs clearer human oversight, clearer communication, and closer review. That is an inference from the Surrey paper and release rather than an explicit institutional rule, but it is a reasonable one: once trust drops, speed becomes a weak proxy for quality.
The third implication is equity. If some students use AI comfortably while others see it as risky, impersonal, or hard to interpret, the same tool may widen support gaps rather than narrow them. Teams should therefore collect feedback that distinguishes usefulness from trust, and efficiency from learning value, before any workflow becomes normal practice. The practical takeaway is simple: define ownership, ask students early, and make the response visible.
This is where open-text evidence matters. If universities ask students only whether AI feedback was useful, they will miss the distinctions that shape whether it improves learning: was it clear, did it feel generic, did students trust it enough to act, did it reduce shame, or did it remove the human relationship that made the advice credible? Those are the questions that usually surface in comments before they settle into an annual metric.
At Student Voice AI, we see this as a governance issue as much as an analytics issue. If institutions start collecting AI-related comments through module evaluations, pilot surveys, or targeted reviews, they need a method that can separate trust, care, fairness, clarity, and usefulness without collapsing them into one AI theme. That is where our comparison of Student Voice Analytics and generic LLMs is useful, and why the student comment analysis governance checklist matters before an AI feedback pilot scales.
Q: What should institutions do now if they are testing AI feedback?
A: Start by mapping where AI already touches feedback, whether that is formative support, draft comments, tutoring, or assessment guidance. Then add one short question and one open-text prompt to the next relevant module evaluation or pilot review, separating usefulness from trust and clarity from care. Give one named team responsibility for follow-up and publish a short response to students so the evidence does not disappear into an internal note.
Q: What is the timeline and scope of this Surrey development?
A: The University of Surrey press release was published on 31 March 2026. The underlying paper, "The care-full craft of feedback in an age of generative AI", was published online on 18 March 2026 in Assessment & Evaluation in Higher Education. This is research-led sector evidence for higher education rather than an OfS or QAA rule change.
Q: What is the broader implication for student voice?
A: The broader implication is that AI feedback should now be treated as a student voice and quality design issue, not only a technical one. Universities will need evidence not just on whether AI is used, but on whether it supports relationships, confidence, and action in ways students recognise as educationally credible.
[University of Surrey]: "AI could undermine meaningful learning unless feedback stays rooted in connection, researchers recommend" Published: 2026-03-31
[Assessment & Evaluation in Higher Education]: "The care-full craft of feedback in an age of generative AI" Published: 2026-03-18