Updated Mar 17, 2026
At Student Voice AI, we spend a lot of time helping universities understand why students describe feedback as useful, generic, timely, or impossible to act on. That is why Jeanette L. O'Neil and Dorottya Nagy's paper in Assessment & Evaluation in Higher Education, "Perceptions of feedback: the use of self-reflection to improve student satisfaction", matters. For UK universities tracking NSS assessment-and-feedback results, module evaluation comments, and other student voice data, it asks a practical question: can a very small reflective step change how students receive assessment feedback?
Feedback remains one of the most consistently contested parts of the student experience. Universities often focus on turnaround times, marking consistency, and staff workload, all of which matter. But there is a quieter issue underneath those debates: students do not all approach feedback in the same state of mind. Some open their comments ready to act on them; others read defensively or superficially, with little idea of what to look for.
O'Neil and Nagy test whether that pre-feedback mindset can be improved through a short self-reflection activity. The study compared five essay-type assignments where students received feedback in the usual way with a sixth assignment where students completed a brief self-reflection task immediately before receiving their feedback. The setting was a second-year undergraduate unit with 59 students, and the analysis compared reported satisfaction with feedback across lower-, medium-, and higher-performing students.
For UK higher education teams, that is a useful framing. It shifts the question from "How quickly did feedback arrive?" to "What helps students make use of feedback once it appears?" That is especially relevant when institutions are trying to improve weak assessment-and-feedback scores but are unsure whether the real issue is delay, quality, tone, or uptake.
The clearest result is that lower- and medium-performing students were more satisfied with feedback after the self-reflection step was introduced. That matters because these are often the students institutions are most concerned about in progression, continuation, and attainment discussions. If a simple reflective prompt helps them process feedback more constructively, it could be a relatively low-cost way to improve a part of the student experience that often feels difficult to shift.
Higher-performing students did not show the same significant change. The implication is not that reflection is irrelevant for stronger students, but that the benefit may be concentrated where feedback is hardest to absorb. Students who are already performing well may already have routines for interpreting comments, whereas students who are struggling may need more support to move from emotional reaction to productive action.
"Self-reflection appears to be particularly important for lower-performing students."
The paper therefore suggests that feedback dissatisfaction is not evenly distributed across a cohort. Universities can miss that if they only look at whole-class averages or institution-level survey scores. A module may appear to be performing adequately overall while still failing the students who most need actionable guidance. That is one reason free-text comments and subgroup analysis matter: they reveal who is finding feedback unclear, discouraging, or hard to use.
There is also a more practical insight in the intervention itself. The reflective task came immediately before students saw their feedback. In other words, timing mattered. The authors were not testing a general culture of reflection in the abstract; they were testing whether a short pause before feedback could change receptiveness. For UK teams, that points towards small workflow changes in the VLE, feedback cover sheets, or in-class debriefs, rather than a major redesign of assessment policy.
The paper does not claim that self-reflection automatically improves attainment, and that distinction matters. What it shows is a change in how students perceived feedback. For Student Experience and Market Insights teams, that is still important. Perception shapes engagement, and engagement shapes whether comments are likely to be used at all.
The first implication is to build a brief reflective prompt into feedback release, especially on modules where feedback satisfaction is weak or where lower-performing students are overrepresented in negative comments. Simple questions such as "What do you think you did well?", "What were you least confident about?", and "What do you most want this feedback to clarify?" can help students approach comments with a clearer purpose.
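As a minimal sketch of what "building a reflective prompt into feedback release" could look like in practice, the snippet below gates feedback behind a short reflection form. All names here (the class, methods, and student identifiers) are hypothetical illustrations, not part of the study or any particular VLE:

```python
# Minimal sketch: gate feedback release behind a short reflection step.
# Class and method names are hypothetical, not drawn from the paper or a VLE.

REFLECTION_QUESTIONS = [
    "What do you think you did well?",
    "What were you least confident about?",
    "What do you most want this feedback to clarify?",
]

class FeedbackRelease:
    def __init__(self):
        self.reflections = {}   # student_id -> list of answers
        self.feedback = {}      # student_id -> feedback text

    def submit_reflection(self, student_id, answers):
        # Require an answer to every question before unlocking feedback.
        if len(answers) != len(REFLECTION_QUESTIONS):
            raise ValueError("Please answer every reflection question.")
        self.reflections[student_id] = answers

    def view_feedback(self, student_id):
        # Feedback is only shown once the reflection step is complete.
        if student_id not in self.reflections:
            return "Complete the short reflection to unlock your feedback."
        return self.feedback.get(student_id, "No feedback released yet.")

release = FeedbackRelease()
release.feedback["s1"] = "Strong argument; referencing needs work."
print(release.view_feedback("s1"))   # still locked: prompts for reflection
release.submit_reflection("s1", ["Structure", "Referencing", "Citation style"])
print(release.view_feedback("s1"))   # now shows the marker's comments
```

The design choice mirrors the paper's timing point: the reflection sits immediately before the feedback is seen, rather than existing as a separate, optional activity.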
The second implication is to segment feedback evidence more carefully. If lower-performing students benefit more from reflective scaffolding, then institutions should not rely only on headline averages when reviewing assessment and feedback. Combine survey scores with open-text comments, and where possible compare patterns by attainment band, level of study, or course context. That makes it easier to see whether one feedback process is working very differently for different groups.
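A toy sketch of that kind of subgroup comparison is below, using invented satisfaction scores on a 1-5 scale. The data, band labels, and thresholds are illustrative assumptions, not figures from the study:

```python
# Toy subgroup comparison: satisfaction by attainment band vs the headline mean.
# The records below are invented for illustration only.
from collections import defaultdict
from statistics import mean

records = [  # (attainment_band, satisfaction score 1-5)
    ("lower", 2), ("lower", 3), ("lower", 2),
    ("medium", 3), ("medium", 4), ("medium", 3),
    ("higher", 4), ("higher", 4), ("higher", 5),
]

by_band = defaultdict(list)
for band, score in records:
    by_band[band].append(score)

overall = mean(score for _, score in records)
print(f"overall mean: {overall:.2f}")
for band in ("lower", "medium", "higher"):
    band_mean = mean(by_band[band])
    # The gap shows what the headline average hides for each group.
    print(f"{band:>6}: mean {band_mean:.2f} (gap {band_mean - overall:+.2f})")
```

Even this crude split shows why a module can look "adequate overall" while one band sits well below the headline figure.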
The third implication is methodological. Universities should not treat feedback improvement as only a turnaround-time problem. This paper sits neatly alongside wider evidence that usefulness, clarity, and actionability matter as much as speed. Student Voice Analytics fits that gap well: large-scale comment analysis can distinguish complaints about timing from complaints about vagueness, tone, or lack of next steps, and can show whether those concerns cluster among particular groups of students.
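As a deliberately simple illustration of separating timing complaints from vagueness, tone, and actionability complaints in free text, the sketch below uses keyword matching. The keyword lists are assumptions invented for this example; production comment analysis would need far more than this:

```python
# Toy categoriser for free-text feedback comments.
# Keyword lists are illustrative assumptions, not a validated taxonomy.
CATEGORIES = {
    "timing": ["late", "slow", "weeks", "turnaround"],
    "vagueness": ["vague", "generic", "unclear"],
    "tone": ["harsh", "discouraging", "dismissive"],
    "actionability": ["how to improve", "next steps", "what to do"],
}

def categorise(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    hits = [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]
    return hits or ["other"]

comments = [
    "Feedback arrived weeks after the deadline.",
    "Comments were generic and gave no next steps.",
]
for c in comments:
    print(categorise(c), "-", c)
```

The point of the sketch is the distinction itself: a comment can be a timing complaint, a clarity complaint, or both, and conflating them points improvement work at the wrong lever.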
Q: How can universities apply this finding without creating a large new workload for staff?
A: Start with a short reflective prompt at the point feedback is released, rather than a whole new assessment process. A two- or three-question form in the VLE, a structured feedback cover sheet, or a brief in-class reflection before comments are opened can all work. The key is to prepare students to read feedback actively, not passively.
Q: What are the main methodological limits of this study?
A: The study was based on one second-year undergraduate unit with 59 students, and the outcome was reported satisfaction with feedback rather than direct learning gain. That means universities should treat it as promising evidence, not a universal rule. The practical value lies in the intervention being simple enough to test locally in module evaluations or feedback pilots.
Q: What does this mean for student voice and survey analysis more broadly?
A: It suggests that weak feedback scores may sometimes reflect how students receive feedback, not only what staff write. For student voice work, that means combining scaled items with open-text prompts and looking at subgroup patterns carefully. If lower-performing students repeatedly describe feedback as hard to use, reflection and guidance may need as much attention as speed or volume.
[Paper Source]: Jeanette L. O'Neil and Dorottya Nagy, "Perceptions of feedback: the use of self-reflection to improve student satisfaction", Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2025.2572034