Updated Apr 10, 2026
Bad evaluation questions produce bad teaching data. The Assessment & Evaluation in Higher Education paper by Erin M. Buchanan, Jacob F. Miranda and Christian Stephens, "Redesigning student evaluations of teaching: integrating faculty and student perspectives", is useful for UK institutions because it shows how evaluation forms improve when staff and students design them together. The paper treats student evaluations of teaching as instruments that need deliberate design, evidence, and stakeholder input if they are to produce feedback that teams can trust and act on.
Many universities still inherit module evaluation forms that have evolved through habit rather than design. Questions are added over time, local priorities shift, and institutions end up relying on instruments that may mix together teaching quality, workload, course organisation, challenge level, and the wider learning environment. That creates a familiar problem for UK higher education teams: the survey runs every term, but the results are hard to interpret and even harder to turn into clear action. That gap is why using student evaluation data more effectively in UK higher education matters.
This paper addresses that problem directly. Buchanan, Miranda and Stephens report a six-year, six-stage project to redesign student evaluations of teaching by integrating faculty perspectives, student perspectives, and policy recommendations. The practical question is straightforward and important: if universities want more useful evaluation data, what should they ask students, and where do staff and students agree or disagree on what effective teaching looks like?
The paper frames evaluation redesign as a collaborative process, not a technical clean-up exercise. Rather than assuming the existing form was good enough, the authors brought together multiple sources of evidence over time. That matters because evaluation instruments often fail quietly: they keep producing scores even when those scores are too blunt or internally muddled to guide improvement well. For institutions, the takeaway is simple: redesign is strongest when it is treated as shared evidence work, not a form-filling exercise.
There was meaningful overlap between faculty and student views of effective teaching. The study found agreement on several characteristics that should sit close to the centre of an evaluation instrument, including communication, commitment, respect, course preparation and organisation, and passion for teaching. That gives universities a more defensible core set of teaching-related items when they review or rebuild an evaluation form.
"faculty and students agreed on ... communication, commitment, respect, course preparation and organization, and passion for teaching."
The most useful tensions appeared where staff and students did not see things the same way. According to the abstract, the clearest differences concerned difficulty or rigour and the learning environment. For institutions, that is not a reason to dismiss either perspective. It is a signal that some items may be carrying more than one meaning, or that respondents are judging different aspects of the student experience when they answer the same question. In practice, disagreement can help teams spot where a single survey item is doing too much work; the same logic underpins the case for free-text comments in module evaluation.
The wider implication is that better evaluation design depends on separating constructs more carefully. If one question is really capturing challenge, another is capturing classroom conditions, and a third is capturing teaching quality, universities should not collapse those issues into a single story about lecturer performance. This is especially relevant in UK higher education, where module evaluations, NSS-style questions, and internal dashboards can all encourage over-interpretation of simple averages unless the instrument itself is well designed. Clearer constructs lead to cleaner reporting, more credible follow-up conversations, and better-targeted action.
For UK higher education teams, three practical actions follow from the paper.
First, universities should treat evaluation redesign as a co-design task. Bring students, academic staff, and professional services teams into the same process, review which questions are actually used in decision-making, and remove or rewrite items that blend together distinct ideas. That is easier when teams understand what motivates students to take part in teaching evaluations, not just what they say once they respond. If a survey is meant to support enhancement, each question should point towards a plausible action.
Second, institutions should be cautious with items about challenge, rigour, and the learning environment. Those themes matter, but they are not interchangeable with teaching quality. Where disagreement is predictable, it makes sense to ask more targeted questions and to interpret the responses alongside contextual data on assessment patterns, room conditions, timetabling, or delivery mode. That gives teams a better chance of fixing the right problem rather than over-reading a single score.
Third, this is exactly where open-text comment analysis becomes valuable. A revised questionnaire can tell you what to ask, but free-text responses explain why students answered as they did. For Student Experience teams, that means pairing scale items with well-designed open prompts and then analysing those comments systematically. Student Voice Analytics fits naturally here: it helps universities group written feedback on clarity, organisation, respect, workload, and learning environment at scale, so survey redesign leads to clearer evidence rather than just a cleaner form.
Q: How should a UK university apply these findings when redesigning module evaluations?
A: Start by auditing the current form against its actual purpose. Identify which questions are used for enhancement, which are used for assurance, and which no one acts on. Then run a structured co-design process with students and staff, pilot revised questions in a limited setting, and compare the results with open-text responses before rolling changes out more widely. After rollout, institutions still need structured staff discussion so evaluation findings can be interpreted and used well. That sequence makes it easier to spot weak items before they shape institution-wide reporting.
Q: What should institutions do when students and staff disagree about rigour or the learning environment?
A: Treat that disagreement as diagnostic evidence, not noise. It usually means the survey item is bundling together multiple ideas or that different groups are interpreting the same prompt differently. The answer is not to force consensus, but to sharpen the wording, separate the constructs, and use comments or follow-up qualitative work to understand what respondents mean. Done well, that turns disagreement into a design clue rather than a reporting headache.
Q: What does this change about how universities should think about student voice?
A: It suggests that student voice is not only about collecting more responses. It is also about asking better questions and creating better routes from feedback to action. When evaluation forms are designed with students rather than simply imposed on them, institutions are more likely to gather evidence that is credible, explainable, and genuinely useful for improving teaching and the wider student experience. That makes student voice easier to defend in both enhancement and assurance conversations.
[Paper Source]: Erin M. Buchanan, Jacob F. Miranda and Christian Stephens, "Redesigning student evaluations of teaching: integrating faculty and student perspectives", Assessment & Evaluation in Higher Education (2025). DOI: 10.1080/02602938.2025.2479117