Updated May 05, 2026
Assessment problems rarely start when students complain about them. They usually start earlier, in the brief, the sequencing, or the kind of feedback a task makes possible. That is why Sheila Amici-Dargan, Kaisa Ilmari, Annabelle Buckley, Mathusha Chandrakumaran, Carol Evans and Stephen Rutherford's Student Engagement in Higher Education Journal paper, "Empowering students as partners in enhancing assessment practice, using the research-informed EAT framework", matters for UK universities that use student voice to improve assessment and feedback. The paper shows what changes when students are given a structured way to review assessments with staff, rather than being asked for a reaction only after marks are released.
Many universities already collect comments about assessment through module evaluations, NSS free text, staff-student committees, or local review exercises. The harder problem is turning those comments into a usable review of the assessment itself. Students may know that a task felt unclear, heavy, or hard to learn from, but unless institutions give them a better structure for saying why, the response often stays at the level of general dissatisfaction.
This paper addresses that gap through the Equity, Agency, and Transparency (EAT) framework, a research-informed approach to assessment review that organises discussion around assessment literacy, feedback, and design. The authors describe a series of activities in which students used EAT resources to review assessments and co-design interventions to improve assessment literacy and feedback. For UK Student Experience, Quality, and educational development teams, the question is highly practical: how do you move student partnership in assessment from a good intention to a method that produces clearer enhancement decisions?
The clearest lesson is that student partnership works better when it has a framework. The paper argues that students can contribute more usefully when they are not asked only whether an assessment was good or bad, but are instead given a shared language for discussing what the assessment is doing. EAT matters because it breaks assessment down into dimensions that staff and students can actually examine together. That makes partnership more specific, and therefore more actionable.
The paper also widens the window in which student voice can influence assessment. The abstract describes three routes: students can engage with and understand the core elements of an assessment, co-design assessment tasks, or contribute to post hoc revision of assessments after delivery. That is a useful shift for UK institutions. It means student voice does not have to be trapped at the end of the cycle in a survey report. Students can help shape assessment before, during, and after it runs.
The authors summarise the practical value of the approach clearly:
"EAT [was] effective in providing opportunities for students to participate in discourse around assessment"
A second important finding is that partnership here is not only about gathering institutional intelligence. It is also about building student capability. The paper links assessment improvement to self-regulatory skills, agency, independence, and criticality. In other words, students are not treated as external reviewers commenting on a finished product. They are positioned as participants who can better understand assessment itself and help peers do the same. That is especially useful for universities trying to improve the student experience without reducing student voice to satisfaction scoring alone.
The paper is also strong on the bridge from review to intervention. The point is not simply to run a consultation exercise and collect another set of opinions. EAT is used to start conversations about enhancement needs and to scaffold peer-led intervention. For UK teams, that matters because one of the common weaknesses in assessment feedback systems is that students are asked what they think, but not drawn into a clear improvement process afterwards. The logic here fits closely with evidence that student evaluations improve when staff and students redesign them together. Structured partnership produces better evidence than symbolic consultation.
The framework also helps separate different kinds of assessment complaint that are easily bundled together in open comments. Students may criticise an assessment because the task felt opaque, because the feedback was thin, or because they did not understand how to act on it. Those are related problems, but they are not identical. That distinction matters because students often read feedback as fairer when it is more usable, not simply more reassuring, a pattern that appears in research on why feedback comments feel fairer when they are actionable. A framework like EAT gives institutions a better chance of diagnosing the right problem first.
For UK universities, the first implication is to stop asking only broad assessment satisfaction questions when the goal is improvement. If institutions want assessment feedback that can guide change, they should give students prompts that distinguish design, feedback, and assessment literacy. That produces better evidence than a single fairness or satisfaction item, and it makes later review conversations much more precise.
Second, institutions should bring students into assessment review earlier, not only after the module has ended. A small staff-student working group can review a task brief, rubric, feedback process, or sequence of assessments before the next run of a module. That is especially helpful when teams already know from previous comments where confusion or frustration keeps recurring. The benefit is earlier correction of preventable problems rather than another cycle of predictable complaints.
Third, universities should combine partnership work with systematic analysis of open comments. This is where Student Voice Analytics fits naturally. Assessment comments often mix together workload, clarity, fairness, feedback usefulness, organisation, and timing. A repeatable NSS open-text analysis methodology helps teams see which of those themes are actually repeating across modules and cohorts before they begin a redesign conversation. The benefit is a more grounded discussion and less drift into anecdote or the loudest recent complaint.
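To make the "which themes actually repeat" step concrete, here is a minimal sketch of how a team might tally pre-coded assessment comments across modules. Everything in it is illustrative: the module codes, theme labels, and the `recurring_themes` helper are hypothetical, and in practice the theme tags would come from a coding frame or text classifier rather than being hand-entered.

```python
from collections import Counter, defaultdict

# Hypothetical pre-coded comments as (module, theme) pairs.
# In a real workflow these tags would come from a coding frame
# applied to NSS or module-evaluation free text.
tagged_comments = [
    ("BIO101", "feedback usefulness"),
    ("BIO101", "clarity of brief"),
    ("BIO101", "feedback usefulness"),
    ("CHEM202", "workload"),
    ("CHEM202", "feedback usefulness"),
    ("PHYS150", "clarity of brief"),
    ("PHYS150", "feedback usefulness"),
]

def recurring_themes(comments, min_modules=2):
    """Return themes seen in at least `min_modules` distinct modules,
    with per-module counts, most widespread theme first."""
    by_theme = defaultdict(Counter)
    for module, theme in comments:
        by_theme[theme][module] += 1
    recurring = {t: dict(c) for t, c in by_theme.items() if len(c) >= min_modules}
    return dict(sorted(recurring.items(), key=lambda kv: -len(kv[1])))

for theme, modules in recurring_themes(tagged_comments).items():
    print(f"{theme}: seen in {len(modules)} modules -> {modules}")
```

Even a simple tally like this distinguishes a theme that recurs across a programme (worth a redesign conversation) from a one-off complaint in a single module, which is exactly the grounding the redesign discussion needs.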
Finally, institutions should show what changed after students helped review assessment practice. If student partners spend time diagnosing problems and nobody can see the response, the exercise will lose credibility quickly. The lesson here is the same as in wider work on why student feedback only works when universities show what changed: partnership needs visible follow-through. The benefit is stronger trust in the process and better participation next time.
Q: How can a university apply this paper when reviewing assessment across a programme or department?
A: Start with one or two assessments that already attract recurring comments. Ask a small student-staff group to review the task using a simple framework covering design, feedback, and assessment literacy. Then compare that discussion with existing survey comments and act on the overlap. That approach is practical, bounded, and more likely to produce clear changes than a general invitation for "feedback on assessment".
Q: What are the methodological limits of this paper?
A: This is a practice-based case study centred on a series of assessment review activities, not a sector-wide comparative evaluation. The abstract supports the case for EAT as a useful framework, but it does not offer a universal benchmark or a controlled claim that the same process will work everywhere unchanged. UK teams should use it as an implementation model and test the approach against their own assessment feedback data.
Q: What does this change about student voice more broadly?
A: It suggests that student voice works best when students are given tools to diagnose their experience, not only opportunities to rate it. Structured partnership can make feedback on assessment clearer, more specific, and easier to act on. That matters because universities often have no shortage of comments. What they need is a better route from those comments to improvement.
[Paper Source]: Sheila Amici-Dargan, Kaisa Ilmari, Annabelle Buckley, Mathusha Chandrakumaran, Carol Evans and Stephen Rutherford, "Empowering students as partners in enhancing assessment practice, using the research-informed EAT framework", Student Engagement in Higher Education Journal. DOI: 10.66561/sehej.v7i3.1246