Student evaluations improve when staff and students redesign them together

Updated Apr 01, 2026

At Student Voice AI, we spend a lot of time thinking about the quality of the questions universities ask students, not just the quality of the answers they get back. That is why the Assessment & Evaluation in Higher Education paper by Erin M. Buchanan, Jacob F. Miranda and Christian Stephens, "Redesigning student evaluations of teaching: integrating faculty and student perspectives", is useful for UK institutions. It treats student evaluations of teaching as instruments that need deliberate design, evidence, and stakeholder input if they are going to produce feedback teams can trust and act on.

Context and research question

Many universities still inherit module evaluation forms that have evolved through habit rather than design. Questions accumulate over time, local priorities shift, and institutions end up relying on instruments that conflate teaching quality, workload, course organisation, challenge level, and the wider learning environment. That creates a familiar problem for UK higher education teams: the survey runs every term, but the results are difficult to interpret and even harder to turn into clear action.

This paper addresses that problem directly. Buchanan, Miranda and Stephens report a six-year, six-stage project to redesign student evaluations of teaching by integrating faculty perspectives, student perspectives, and policy recommendations. The practical question is straightforward and important: if universities want better evaluation data, what should they ask students, and where do staff and students agree or disagree on what effective teaching looks like?

Key findings

The paper frames evaluation redesign as a collaborative process, not a technical clean-up exercise. Rather than assuming the existing form was good enough, the authors brought together multiple sources of evidence over time. That matters because evaluation instruments often fail quietly: they keep producing scores, even when those scores are too blunt or internally muddled to guide improvement well.

There was meaningful overlap between faculty and student views of effective teaching. The study found agreement on several characteristics that should sit close to the centre of an evaluation instrument, including communication, commitment, respect, course preparation and organisation, and passion for teaching.

"faculty and students agreed on ... communication, commitment, respect, course preparation and organization, and passion for teaching."

The most useful tensions appeared where staff and students did not see things the same way. According to the abstract, the clearest differences concerned difficulty or rigour and the learning environment. For institutions, that is not a reason to dismiss either perspective. It is a signal that some items may be carrying more than one meaning, or that respondents are judging different aspects of the student experience when they answer the same question.

The wider implication is that better evaluation design depends on separating constructs more carefully. If one question is really capturing challenge, another is capturing classroom conditions, and a third is capturing teaching quality, universities should not collapse those issues into a single story about lecturer performance. This is especially relevant in UK higher education, where module evaluations, NSS-style questions, and internal dashboards can all encourage over-interpretation of simple averages unless the instrument itself is well designed.

Practical implications

First, universities should treat evaluation redesign as a co-design task. Bring students, academic staff, and professional services teams into the same process, review which questions are actually used in decision-making, and remove or rewrite items that blend distinct ideas. If a survey is meant to support enhancement, each question should point towards a plausible action.

Second, institutions should be cautious with items about challenge, rigour, and the learning environment. Those themes matter, but they are not interchangeable with teaching quality. Where disagreement is predictable, it makes sense to ask more targeted questions and to interpret the responses alongside contextual data on assessment patterns, room conditions, timetabling, or delivery mode.

Third, this is exactly where open-text comment analysis becomes valuable. A revised questionnaire can tell you what to ask, but free-text responses explain why students answered as they did. For Student Experience teams, that means pairing scale items with well-designed open prompts and then analysing those comments systematically. Student Voice Analytics fits naturally here: it helps universities group written feedback on clarity, organisation, respect, workload, and learning environment at scale, so survey redesign leads to clearer evidence rather than just a cleaner form.

FAQ

Q: How should a UK university apply these findings when redesigning module evaluations?

A: Start by auditing the current form against its actual purpose. Identify which questions are used for enhancement, which are used for assurance, and which no one acts on. Then run a structured co-design process with students and staff, pilot revised questions in a limited setting, and compare the results with open-text responses before rolling changes out more widely.

Q: What should institutions do when students and staff disagree about rigour or the learning environment?

A: Treat that disagreement as diagnostic evidence, not noise. It usually means the survey item is bundling together multiple ideas or that different groups are interpreting the same prompt differently. The answer is not to force consensus, but to sharpen the wording, separate the constructs, and use comments or follow-up qualitative work to understand what respondents mean.

Q: What does this change about how universities should think about student voice?

A: It suggests that student voice is not only about collecting more responses. It is also about asking better questions and creating better routes from feedback to action. When evaluation forms are designed with students rather than simply imposed on them, institutions are more likely to gather evidence that is credible, explainable, and genuinely useful for improving teaching and the wider student experience.

References

Erin M. Buchanan, Jacob F. Miranda and Christian Stephens, "Redesigning student evaluations of teaching: integrating faculty and student perspectives", Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2025.2479117

