End-of-unit surveys miss the moment when feedback can still change the course

Updated May 07, 2026

Student feedback is hardest to trust when it arrives after the teaching has already finished. At Student Voice AI, we see the same problem repeatedly in evaluation comments: universities collect a large volume of feedback, but current students often never feel the benefit. That is why Steve Briggs, Julie Brunton, Ruki Heritage and Amy McLaughlan's Student Engagement in Higher Education Journal paper, "Is it time to move away from end of unit surveys?", matters. For UK universities trying to close the feedback loop on student voice initiatives, the paper makes a direct challenge: if feedback comes too late to change anything, the method itself may be part of the problem.

Context and research question

The case study focuses on the University of Bedfordshire, which had used a standardised end-of-unit survey, the Bedfordshire Unit Survey or BUS, for more than fifteen years. The institutional logic was familiar across UK higher education: use one common survey to compare units, monitor performance, and inform enhancement planning. The problem was that response rates had been falling since around 2018, and even a relaunch with fewer, refocused questions did not reverse the decline.

The authors then asked a more useful question than "How do we get more students to complete the survey?". They examined whether end-of-unit surveys were still the right mechanism at all. Their case study brings together the rationale for removing the survey, the design of a replacement Student Voice Principles and Framework, and the first evaluation evidence from that new approach. For teams reviewing module evaluation design, that question connects directly with wider work on how teaching evaluations improve when staff and students redesign them together.

Key findings

The first finding is that repetition and survey saturation were undermining participation long before any analysis began. Because the BUS used one common question set across units, students were repeatedly asked to complete near-identical surveys across their programme. The paper notes that a student on a typical undergraduate pathway could be asked to complete the same end-of-unit survey at least ten times. That sat alongside NSS, UKES, service surveys, Students' Union questionnaires, and accreditation-related surveys. The effect was not simply lower response rates. It was a gradual loss of belief that this was a meaningful use of students' time.

The second finding is that timing was not a minor design flaw, but a structural weakness. Students often saw end-of-unit feedback as something that would mainly help the next cohort rather than themselves. The paper captures that neatly in one staff summary:

"the unit survey was an exercise in altruism"

That line matters because it reframes the usual problem. Low response is not just a motivation issue. It can be a rational response to a process that asks students to invest effort after the point when change is possible.

The third finding is that Bedfordshire replaced one late survey with a four-stage feedback loop and three distinct feedback routes. The new framework runs through Ask, Analyse, Act, and Acknowledge. Students can give feedback in class or meetings through mid-point exercises, through course representatives and student partner roles, or through a 24/7 online form. Feedback is then thematically analysed, mapped to standard categories such as teaching, learning opportunities, and assessment and feedback, and discussed through termly School Student Experience Committees. This is important because the paper is not arguing for less student voice. It is arguing for a more usable architecture around it.

The early evaluation evidence is striking. Online form submissions rose from 135 in AY22/23 to 253 in AY23/24, then to 460 in just the first term of AY24/25. The online route also improved representation for groups that had been less visible through elected representative structures, including international fee payers, postgraduate taught students, and some ethnic groups. At the time of writing, the paper reports that over 90% of the Term 1 AY24/25 feedback submitted through the online form had already been actioned, with the feedback loop closed to the student's satisfaction. For UK teams, that is a strong practical signal: earlier and more varied routes can produce not only more feedback, but more inclusive and more actionable feedback.

The final finding is that removing the end-of-unit survey did not remove complexity. The new system created fresh demands around committee design, staff and representative training, meeting attendance, and how to interpret contradictory feedback from different channels. That is an important strength of the paper. It does not pretend that a multi-modal model runs itself. It shows that a better student voice system still needs governance, role clarity, and disciplined follow-through.

Practical implications

The first implication for UK universities is to stop treating end-of-unit surveys as the default instrument simply because they are familiar. If a survey lands after the teaching is finished, it can be useful for retrospective review, but it is weak at improving the experience of the students who completed it. Mid-point prompts, short in-class exercises, and always-on channels create more chances to act while the unit is still live. The benefit is immediate: students can see that feedback changes something for their own cohort, not only the next one.

Second, universities should build a student voice system rather than rely on a single feedback event. Bedfordshire's model works because it combines course representatives, mid-point activities, and a permanent online route, then brings them together through one loop. That is a practical model for institutions trying to move beyond survey fatigue while keeping evidence coherent. If you are reviewing how to compare themes across channels, the NSS open-text analysis methodology is useful because it shows how consistent categorisation helps different feedback sources speak to each other.

Third, institutions should separate feedback about teaching from feedback about the wider operating environment. One reason the paper criticises end-of-unit surveys is that they can mix up academic practice with timetabling, registration, transport, or other institutional frustrations. That creates unfairness for staff and muddier evidence for decision-makers. Multi-channel systems can route problems to the right owner more quickly, which is one reason Student Voice Analytics fits naturally here: it helps teams distinguish recurring themes across comments instead of treating every open-text issue as the same kind of signal. The benefit is cleaner evidence and faster action.

Finally, universities should treat acknowledgement as part of the method, not the communication plan at the end. The paper's most transferable lesson is not only "collect feedback earlier". It is "design the return path". Students need to know what happened, who acted, and where issues remain unresolved. That takes governance discipline as well as good intentions, which is why a student comment analysis governance checklist becomes useful once institutions start combining multiple routes and more open-text evidence. The benefit is credibility: student voice becomes something students can recognise as consequential.

FAQ

Q: How can a university move beyond end-of-unit surveys without losing comparability across modules and courses?

A: Keep the channels flexible, but keep the analysis structure consistent. Bedfordshire's model still uses common themes and committee reporting, even though students can feed back in different ways. In practice, that means agreeing shared categories, documenting how issues are logged, and reporting patterns at course or school level rather than relying on one identical questionnaire for every unit.

Q: What should institutions do when different feedback channels seem to contradict each other?

A: Treat contradiction as a signal, not a failure. Different groups often use different routes, and each route has its own participation bias. Compare who is represented, what kind of issue is being raised, and whether the concern sits at unit, course, or institutional level. The answer is usually better triangulation, not forcing everything back into one survey.

Q: Does this paper mean universities should abandon surveys entirely?

A: No. The stronger reading is that universities should stop expecting one end-point survey to do all the work of student voice. Structured surveys still have value, especially when timed well and used alongside open comments, reps, and visible action processes. The broader implication is that student feedback works best as a continuous loop, not a single annual event.

References

[Paper Source]: Briggs, S., Brunton, J., Heritage, R. and McLaughlan, A. "Is it time to move away from end of unit surveys?" Student Engagement in Higher Education Journal. DOI: 10.66561/sehej.v7i1.1380

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround


© Student Voice Systems Limited, All rights reserved.