Published Apr 11, 2022 · Updated Mar 12, 2026
If feedback arrives after a module ends, it cannot improve learning; it can only explain a grade. That tension helps explain why feedback remains one of the most criticised aspects of higher education courses [1, 2].
Feedback should sit at the heart of education, but it has often been shaped more by habit than by research, disciplinary context, or intentional design. The irony is especially clear in subjects such as engineering, where feedback is central to the discipline itself. Treating feedback on an arts degree as if it should work in exactly the same way as feedback on a law or computer science degree now feels outdated.
So what is feedback? One of the earliest definitions comes from cybernetics [3]:
"Feedback is the control of a system by reinserting into the system the results of its performance".
In higher education terms, this means assessing a student's performance and responding early enough for that response to shape what the student does next. That timing matters. Feedback is only useful when there is still enough time, and enough overlap in learning outcomes, for students to turn comments into action.
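Wiener's definition can be made concrete with a toy closed loop. This is a hedged sketch, not anything from the source: the function name, the numbers, and the 0.5 "uptake" factor are all illustrative. The point is structural: performance is measured, the gap is reinserted into the system, and the next attempt changes because of it.

```python
# Minimal sketch of Wiener's feedback loop: the system's output is
# measured and "reinserted" to steer its next attempt. A score drifts
# toward a target only because each attempt is adjusted by the gap
# reported back in time for the next attempt. All names and the 0.5
# default "uptake" factor are illustrative assumptions.

def feedback_loop(score: float, target: float, attempts: int,
                  uptake: float = 0.5) -> list[float]:
    """Return the score after each attempt under a simple closed loop."""
    history = []
    for _ in range(attempts):
        gap = target - score      # assess current performance
        score += uptake * gap     # comments acted on before the next task
        history.append(round(score, 2))
    return history

print(feedback_loop(score=40.0, target=70.0, attempts=4))
# → [55.0, 62.5, 66.25, 68.12]

# With the loop cut (uptake=0), information never re-enters the system
# and the score never changes — "information", not feedback:
print(feedback_loop(score=40.0, target=70.0, attempts=4, uptake=0.0))
# → [40.0, 40.0, 40.0, 40.0]
```

The second call is the analogue of comments delivered after the module ends: the assessment still happens, but nothing is reinserted, so nothing improves.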
However, modularised course structures, fewer assessments, and growing class sizes often make that kind of loop difficult to achieve. In many classrooms, what is called feedback is really just information: a description of performance at one moment in time. True feedback is different because it changes future performance. Boud and Molloy argue that this distinction becomes clearer when you separate Feedback Mark 1 from Feedback Mark 2 [4].
Feedback Mark 1 keeps the essential logic of feedback intact. It gives students information on current performance in time to influence later performance. That is very different from receiving comments on a final, one-off exam after a module has ended, which is still common in higher education. For Feedback Mark 1 to work, there also needs to be overlap between the learning outcomes of earlier and later assignments.
If students do not improve on the next task, the question should not fall only on the learner. It should also fall on the quality, timing, and usability of the feedback, and on whether the learning outcome was realistically achievable. Even so, Feedback Mark 1 still assumes that someone else identifies and provides the information the learner needs.
Feedback Mark 2 goes further by making learners active participants in the process and creating "sustainable feedback". In practice, sustainable feedback has four key characteristics:
Enacting Feedback Mark 2 depends on three connected elements of a learning system:
Feedback Mark 2 therefore asks more of students because it relies on active learning. Not every student arrives with strong self-regulation skills, and not every educator assumes that they will. But active learning is itself a skill that can be taught. Students who are less confident in self-regulation need structured opportunities to practise it and improve.
This is the practical value of rethinking feedback design. When feedback moves from a one-way transmission of information to a continuous process embedded across the curriculum, students gain more control over their learning and a clearer sense of how to improve.
For educators, the immediate priorities are to:
For institutions, the wider design challenge is to build feedback into the curriculum rather than attaching it to isolated assignments. That means creating programmes that:
The core takeaway is simple: feedback design should be judged not by whether comments were delivered, but by whether students can use them to learn better next time.
Q: How can text analysis tools be utilised to measure the impact of Feedback Mark 2 on student engagement and learning outcomes?
A: Text analysis tools can help institutions see whether students describe feedback as timely, specific, and useful. By analysing the language in student comments and educator responses, teams can spot recurring themes, points of frustration, and signs that feedback is or is not helping students take action. Sentiment and thematic analysis can also reveal how feedback affects motivation, confidence, and engagement. That makes it easier to improve feedback design with evidence rather than anecdote.
Q: In what ways can Student Voice be systematically incorporated into the feedback process to ensure that feedback is not only sustainable but also aligned with student needs and perspectives?
A: Student Voice becomes more useful when it is built into the feedback process rather than collected afterwards as a separate exercise. Institutions can do that through regular surveys, focus groups, module reviews, and structured reflection tasks that ask students what made feedback clear, unclear, actionable, or hard to use. Students can also help shape feedback criteria and assessment design, which turns feedback into a genuine dialogue rather than a one-way transmission. The result is feedback that is more relevant to student needs and more sustainable over time.
Q: What are the challenges and barriers to implementing Feedback Mark 2 in diverse educational settings, and how can they be overcome?
A: The main barriers are familiar ones: educators may be used to traditional feedback methods, students may vary in their readiness for self-directed learning, and large class sizes can make dialogue harder to sustain. The best response is to treat Feedback Mark 2 as a design change rather than a one-off intervention. Professional development can help staff build confidence, small pilot changes can prove the value of a more dialogic approach, and carefully designed tasks can make active feedback manageable even at scale. Over time, that makes Feedback Mark 2 more realistic and more beneficial for both staff and students.
[1] Krause, K.-L., et al., The first year experience in Australian universities: Findings from a decade of national studies. 2005, Centre for the Study of Higher Education, University of Melbourne.
[2] Higher Education Funding Council for England (HEFCE), The National Student Survey: Findings and trends 2006–2010. 2011.
[3] Wiener, N., The human use of human beings: Cybernetics and society. 1988, Da Capo Press.
[4] Boud, D. and E. Molloy, Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 2013. 38(6): p. 698–712. DOI: 10.1080/02602938.2012.691462
© Student Voice Systems Limited, All rights reserved.