Updated May 04, 2026
Assessment feedback is one of the quickest places where student voice becomes operational pressure. On 15 April 2026, Queen Mary University of London announced a university-wide pilot of EduMark AI, an educator-controlled platform designed to support assessment and feedback. For Student Experience teams, PVCs, and quality professionals, that matters because the pilot turns a familiar student concern (slow, inconsistent, or hard-to-use feedback) into a live institutional test of whether AI can improve the experience without weakening trust or academic oversight.
The immediate change is scale. Queen Mary says EduMark AI is now being piloted across the university, with academics from across the institution invited to express interest, and with follow-up training and guidance promised for participating staff. That moves the project beyond a single local trial. It becomes a broader institutional experiment in whether AI-supported assessment feedback can be integrated into normal marking workflows while keeping academic judgement with staff.
The core design claim is also clear. Queen Mary says the platform was developed to address workload in marking and feedback while improving the clarity, structure, and timeliness of the feedback students receive. The university also says earlier pilot work showed an approximate 60 per cent reduction in marking time and that student responses were encouraging, with participants highlighting the clarity, specificity, and usefulness of the comments they received.
"all marks and feedback are reviewed and approved by staff before release."
That human-oversight point is what makes the announcement more than another generic AI pilot. Earlier Queen Mary updates help explain the direction of travel. In February 2026, the university said EduMark AI had received Google Cloud support to strengthen scalable deployment, secure data handling, and analytics capabilities, while current work focused on assessment, data protection, and the ethical use of AI. A 2025 award notice described the tool as using structured rubrics and prompt frameworks to improve fairness, accuracy, and efficiency, with plans to expand pilot modules and share practice across the institution. Taken together, those updates show a pilot moving from proof of concept towards wider operational use, not a one-off showcase.
The first implication is that faster feedback is not the same thing as better feedback. Universities often hear student concerns about feedback under one broad heading, but the underlying problems differ. Some students mean turnaround time. Others mean vague comments, inconsistent markers, unclear criteria, or feedback that arrives too late to use. Queen Mary's pilot is relevant because it tests whether AI can address several of those pain points at once, but institutions still need to ask which part of the problem students are actually experiencing. Our summary of digital assessment quality priorities is useful here, because staff and students do not always rate the same aspects of assessment highly.
The second implication is governance. Queen Mary's announcement repeatedly stresses educator control, and its earlier February update adds references to secure data handling and ethical use. That is the right instinct. If universities introduce AI into assessment feedback, students will reasonably want to know where it is being used, what staff still review personally, how consistency is checked, and whether the output is genuinely helping learning. This is also where the wider student trust question sits. Our review of students using Generative AI for feedback but still trusting teachers more is a useful reminder that students may welcome speed and structure without automatically treating AI-assisted feedback as equally credible.
The third implication is evidence. A pilot like this should not only measure staff time saved. Institutions should also decide in advance how they will test impact on the student experience. That means looking at module evaluations, assessment complaints, student representative feedback, and open-text comments before and after rollout. Without that baseline, a university may know the workflow is faster but still not know whether students experienced the feedback as clearer, fairer, or more actionable. The practical takeaway is straightforward: if AI-supported assessment feedback is going to scale, it needs a student evidence plan as well as a technical one.
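One way to make that evidence plan concrete is to compare how often each feedback theme appears in open-text comments before and after the pilot. The sketch below is a minimal illustration of that before-and-after check; the theme labels, the comment counts, and the choice of a two-proportion z-test are assumptions for illustration, not part of Queen Mary's published pilot design.

```python
# Minimal sketch of a before/after check on themed feedback comments.
# Theme labels, counts, and the two-proportion z-test are illustrative
# assumptions, not Queen Mary's published evaluation method.
from math import sqrt

def proportion_change(theme: str, before: dict, after: dict,
                      n_before: int, n_after: int) -> tuple:
    """Return (before_rate, after_rate, z) for one theme.

    before/after map theme -> number of comments raising that issue;
    n_before/n_after are the total comments in each window.
    """
    p1 = before.get(theme, 0) / n_before
    p2 = after.get(theme, 0) / n_after
    # Pooled standard error for a two-proportion z-test.
    p = (before.get(theme, 0) + after.get(theme, 0)) / (n_before + n_after)
    se = sqrt(p * (1 - p) * (1 / n_before + 1 / n_after))
    z = (p2 - p1) / se if se else 0.0
    return p1, p2, z

# Hypothetical baseline (pre-pilot) and post-pilot theme counts.
baseline = {"turnaround": 180, "vague_comments": 140, "inconsistent_marking": 90}
post_pilot = {"turnaround": 70, "vague_comments": 150, "inconsistent_marking": 85}

for theme in baseline:
    p1, p2, z = proportion_change(theme, baseline, post_pilot, 1200, 1150)
    print(f"{theme}: {p1:.1%} -> {p2:.1%} (z = {z:+.2f})")
```

The point of even a rough check like this is that a pilot can move one theme (turnaround) sharply while leaving others (vagueness, consistency) flat, which is exactly the distinction a staff-time metric alone would miss.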
This matters for analysis because assessment and feedback comments are rarely about one thing. Students often use the word "feedback" to describe several separate issues at once: marking speed, level of detail, tone, fairness, criteria clarity, and whether the comments helped them improve. Once AI-supported assessment feedback enters that picture, institutions will also need to listen for a new layer of concern: whether students find the feedback generic, whether they trust it, and whether it is clearly owned by staff.
That is where structured open-text analysis becomes more useful. A workflow such as our NSS open-text analysis methodology helps teams separate feedback comments into clearer themes before and after a pilot, instead of treating all dissatisfaction as one problem. Student Voice Analytics is useful when institutions want to compare those patterns across modules or schools with one reproducible method and a defensible audit trail. The product link is secondary here. The main point is institutional: if universities pilot AI in assessment feedback, they should be ready to analyse what students then say about it in a much more precise way.
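To make the theme-separation point concrete, the sketch below shows one naive way to tag open-text comments against a handful of feedback themes, including an AI-trust theme. The theme names and keyword cues are hypothetical, and this is deliberately simpler than our NSS open-text methodology; a production workflow would use a validated taxonomy rather than keyword matching.

```python
# Minimal sketch of splitting open-text "feedback" comments into
# separate themes. The theme names and keyword cues are hypothetical
# illustrations, not the NSS open-text analysis methodology itself.
from collections import Counter

THEMES = {
    "turnaround": ["late", "slow", "weeks", "turnaround"],
    "clarity": ["vague", "unclear", "generic", "confusing"],
    "consistency": ["inconsistent", "different markers", "varies"],
    "trust_in_ai": ["ai", "automated", "not written by", "robot"],
}

def tag_comment(text: str) -> list[str]:
    """Return every theme whose cue words appear in the comment."""
    lowered = text.lower()
    return [theme for theme, cues in THEMES.items()
            if any(cue in lowered for cue in cues)]

def theme_counts(comments: list[str]) -> Counter:
    """Count how many comments raise each theme."""
    counts = Counter()
    for comment in comments:
        counts.update(tag_comment(comment))
    return counts

comments = [
    "Feedback arrived three weeks late and was too vague to act on.",
    "Comments felt generic, like an automated summary rather than my marker.",
]
print(theme_counts(comments))
# Counter({'clarity': 2, 'turnaround': 1, 'trust_in_ai': 1})
```

Even this crude tagging shows why "dissatisfaction with feedback" is not one number: the same short set of comments splits across speed, clarity, and AI-trust concerns that call for different institutional responses.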
Q: What should institutions do now if they are piloting AI-supported assessment feedback?
A: Start with a bounded pilot and make the rules explicit. Tell students where AI is being used, what staff still review, what data is in scope, and how you will test success. Then collect baseline evidence on feedback quality, not just staff workload, so you can see whether students experience the change as more useful.
Q: What is the timeline and scope of Queen Mary's EduMark AI pilot?
A: Queen Mary published the university-wide pilot announcement on 15 April 2026. The scope is institution-specific rather than sector-wide: academics across Queen Mary are being invited to participate, with training and guidance to follow. Earlier milestones include a Google Cloud support announcement on 16 February 2026 and an internal institutional award in October 2025, which show the pilot has been developing over time rather than appearing suddenly.
Q: What is the broader implication for student voice?
A: AI in assessment feedback will change what universities need to listen for in student comments. Teams will need to separate speed from usefulness, consistency from trust, and automation from academic ownership. In other words, AI may improve part of the feedback process, but institutions will still need strong student voice evidence to know whether it improved the experience.
[Queen Mary University of London]: "EduMark AI: Queen Mary-wide pilot to support assessment and feedback" Published: 2026-04-15
[Queen Mary University of London]: "EduMark AI awarded Google Cloud support for next phase" Published: 2026-02-16
[Queen Mary University of London]: "EduMark AI wins Queen Mary's highest institutional honour – the President & Principal's Prize at the Education Excellence Awards 2025" Published: 2025-10-28
[Queen Mary University of London]: "EduMark AI showcased and Beta App released at Festival of Education: Pioneering AI in assessment and feedback" Published: 2025-06-10