The disconnect on what makes good feedback

By Andrew Carlin

Updated Mar 23, 2026

When feedback feels generic or arrives too late to use, students stop engaging with it. That helps explain why feedback remains one of the most criticised aspects of higher education courses [1, 2]. The literature also shows there is no settled definition of feedback, and that what passes for feedback in many institutions is still a one-way transfer of information rather than feedback for learning, with little time for students to act on it and improve [1]. Yet students do value feedback. The problem is that what students say makes feedback useful often differs from what the literature, and the educators designing feedback, prioritise.

Price, Handley and Millar argue that feedback in its current higher education form is too often treated as a product: something students receive but do not meaningfully reflect on, producing a behavioural response rather than a cognitive one [3]. Similarly, McFadden and Munns argue that "student engagement is a process rather than a product" [3, 4]. Hughes captures the consequence neatly, suggesting that trying to learn without reviewing is "like trying to fill the bath without putting the plug in!"

In a large-scale survey, Dawson et al. analysed responses from 406 staff and 400 students across two Australian universities [5]. Their findings revealed a clear disconnect. Educators tended to emphasise design factors such as timing, communication channel, and linked tasks. Students, by contrast, valued feedback that was high quality, detailed, personalised, and clearly actionable. In other words, staff focused more on the design of feedback systems, while students focused on the usefulness of the information they actually received. That difference matters because it cuts against much of the recent literature on feedback and feedforward in UK higher education and many proposed reforms [1-3, 6]. Even so, both staff and students still described feedback largely positively.

When students described useful feedback, they highlighted features such as:

  • Accurately completed, detailed rubrics
  • Digital recordings that were easy to understand
  • Face-to-face feedback, which feels more personal and is often seen as more thorough

In line with recent literature, Carless [2] defines feedback as including:

  • Information about performance or understanding from different sources, such as teachers, peers, or the student themselves
  • Students' responses to that information
  • A process in which learners make sense of comments about the quality of their work to inform future performance or learning strategies

That final point helps explain why feedback designers often prioritise timing and structure across both immediate and longer-term learning. Carless also defines three conditions for effective feedback:

  • Learners need to possess a concept of the standard being aimed for
  • They need to compare their current level of performance with that standard
  • They need to engage in appropriate action that helps close the gap between the two

This last condition carries an implication that appears repeatedly in the literature: feedback must arrive with enough time for students to understand it, question it, and turn it into action. Interestingly, timeliness did not feature strongly in the student survey as a marker of good feedback. Dawson et al. interpret this not as indifference to timing, but as a sign that students care less about the delay between submission and return than about whether there is still enough time to use the feedback well [5]. That interpretation points to the same conclusion. Students judge feedback primarily by its quality and usefulness, not by the framework built around it. The study also found that both staff and students gave relatively little attention to evaluative judgement, peer feedback, exemplars, and feedback moderation, despite how often these appear in contemporary feedback literature [5].

The practical takeaway is clear: there is a persistent disconnect between what learners value in feedback, what educators think makes feedback effective, how institutions design feedback, and what the literature recommends. If institutions want feedback that students will actually use, they need evidence on what students find clear, detailed, personal, and actionable. It is therefore sensible for educators and institutions to:

  • Conduct regular surveys on what students value in feedback, and use those findings to shape feedback frameworks
  • Ensure students are engaged and involved in any changes to feedback, from individual tasks to curriculum design
  • Engage students in feedback design, including through staff-student partnerships in assessment and collaborative approaches such as quality groups
  • Educate students on feedback design and self-regulation as they transition into higher education
  • Give students time and opportunities to question feedback, discuss it with markers, and apply it to the next task

FAQ

Q: How can student voice be more effectively integrated into the feedback process to ensure it aligns with students' needs and expectations?

A: Start by treating feedback design as a dialogue rather than a staff-only process. Surveys, focus groups, and one-to-one conversations can show what students find clear, vague, helpful, or hard to use. Institutions can then analyse that feedback at scale, for example with text analysis tools, to identify recurring patterns. This makes it easier to redesign feedback in ways that reflect student needs rather than assumptions, and it helps students see that their views are shaping practice.

Q: What role does text analysis play in improving the quality of feedback provided to students?

A: Text analysis helps institutions review large volumes of written student comments about feedback without relying on anecdote. It can surface recurring issues around clarity, tone, personalisation, and usefulness, alongside changes in satisfaction or engagement over time. That gives educators evidence to improve feedback processes and check whether those changes are working. For teams that are new to these methods, our student feedback analysis glossary explains the core terms.
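As an illustrative sketch only: the simplest form of this kind of analysis is tallying how often recurring themes appear in free-text comments. The theme keywords and example comments below are invented for demonstration; a real deployment would use trained text-analysis models rather than a hand-made keyword list.

```python
from collections import Counter

# Hypothetical theme keywords (illustrative, not a real taxonomy).
THEMES = {
    "clarity": ["unclear", "confusing", "vague"],
    "timeliness": ["late", "slow", "delayed"],
    "actionability": ["next steps", "improve", "specific"],
}

def tally_themes(comments):
    """Count how many comments touch each theme (once per comment per theme)."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

# Invented example comments, for illustration only.
comments = [
    "The rubric was vague and the comments were confusing.",
    "Feedback came back too late to use on the next essay.",
    "Really specific advice on how to improve my argument.",
]
print(tally_themes(comments))
```

Even this toy version shows the shape of the evidence such tools produce: counts of recurring issues that can be tracked over time, rather than individual anecdotes.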

Q: How can feedback be designed to be more actionable for students, according to their expressed preferences?

A: Actionable feedback is specific, detailed, and personalised enough for students to know what to do next. It should point to strengths, identify gaps, and offer clear steps for improvement. Format matters too: students may respond differently to written comments, audio, or face-to-face discussion. Most importantly, feedback needs to arrive while there is still time to use it on a later task. Regular student input, supported by analysis of student comments, helps institutions keep that balance right.

References

[1] Boud, D. and E. Molloy, Rethinking models of feedback for learning: the challenge of design. Assessment and Evaluation in Higher Education, 2013. 38(6): p. 698-712.
DOI: 10.1080/02602938.2012.691462

[2] Carless, D., Feedback loops and the longer-term: towards feedback spirals. Assessment and Evaluation in Higher Education, 2019. 44(5): p. 705-714.
DOI: 10.1080/02602938.2018.1531108

[3] Price, M., K. Handley, and J. Millar, Feedback: focusing attention on engagement. Studies in Higher Education, 2011. 36(8): p. 879-896.
DOI: 10.1080/03075079.2010.483513

[4] McFadden, M. and G. Munns, Student Engagement and the Social Relations of Pedagogy. British Journal of Sociology of Education, 2002. 23(3): p. 357-366.
DOI: 10.1080/0142569022000015409

[5] Dawson, P., et al., What makes for effective feedback: staff and student perspectives. Assessment and Evaluation in Higher Education, 2019. 44(1): p. 25-36.
DOI: 10.1080/02602938.2018.1467877

[6] Quinton, S. and T. Smallbone, Feeding forward: using feedback to promote student reflection and learning - a teaching model. Innovations in Education and Teaching International, 2010. 47(1): p. 125-135.
DOI: 10.1080/14703290903525911

