Students judge teaching quality through expertise, care, and inspiration

Updated Apr 22, 2026

A strong teaching score can hide three very different student judgments. When universities ask students to rate teaching quality, they often assume everyone is judging the same thing. Adrian Lundberg and Martin Stigmar's Studies in Higher Education paper, "Rethinking teacher quality in Swedish higher education: insights from social sciences student perspectives and Q methodology", shows that they are not. Drawing on post-pandemic perspectives from Swedish social sciences students, the paper identifies three coherent ways students recognise a high-quality university teacher. For institutions using student voice in module evaluations, teaching reviews, and enhancement planning, that matters because one headline score can collapse very different expectations into a single signal.

Context and research question

Teacher quality is a familiar phrase in higher education, but it often hides a methodological problem. Much of the literature, and many institutional frameworks, define quality from the perspective of academics, managers, or policy systems; a similar tension appears in research showing that students and educators prioritise different things in digital assessment quality. Students are usually asked to rate the result, not to explain what they think a high-quality teacher actually is.

This paper addresses that gap in a way that is unusually useful for UK higher education teams. Lundberg and Stigmar used Q methodology with 41 social sciences students from different Swedish universities. Participants sorted 43 statements about teacher quality drawn from earlier research, then explained the thinking behind their choices. The aim was not to produce one average definition, but to identify shared patterns in how students weigh different aspects of good teaching. That makes the paper especially relevant for universities that rely on teaching evaluation surveys, awards, or quality processes that still treat "teaching quality" as a single construct. The practical takeaway is clear: if your survey treats teaching quality as one broad idea, you may not know what students are actually rewarding.

Key findings

The first finding is that students did agree on some fundamentals. Across the three viewpoints, students consistently valued clear communication, coherent course design, and fairness in grading. For UK teams, that shared baseline matters because it gives survey design and reporting a more stable starting point, even when students disagree on the rest.

"Across all three perspectives, students showed strong consensus on the importance of clear communication, coherent course design, and fairness in grading."

The second finding is that one strong student viewpoint prioritised expertise, structure, and transparent assessment above all else. The paper presents this group as the "academic purist". These students saw high-quality teachers as disciplinary experts who know their subject, organise learning coherently, and make expectations explicit. They were much less interested in flexibility, emotional support, or broad student influence over the learning process. For universities, that matters because some positive evaluations may be driven mainly by confidence in structure and standards, not by rapport or innovation, which helps explain some of the patterns in what students say about teaching staff in the social sciences.

A different student viewpoint centred inclusion, trust, and psychological safety. The "inclusive collaborator" valued teachers who foster respectful discussion, prevent exclusion, and make the classroom feel fair and safe. In this account, good teaching is inseparable from relational ethics. That has direct implications for evaluation design, because a broad "teaching quality" item can hide whether students are responding to subject expertise, inclusive practice, or both.

A third viewpoint emphasised inspiration, intellectual challenge, and emotional engagement. The "inspired thinker" wanted teaching to feel alive, stimulating, and meaningful. These students valued curiosity, humour, critical thinking, and the sense that university learning should go beyond passing the next assessment. In practice, this means some students may rate a teacher highly because the teaching stretches and energises them, even when that same course feels demanding or less personal.

The broader conclusion is that student views of quality are coherent, but plural. The paper argues that universities should stop assuming one institutional definition of high-quality teaching will capture what students actually notice. If students are weighing expertise, inclusivity, and inspiration differently, then one overall teaching score is doing too much interpretive work on its own. The benefit of recognising that plurality is better interpretation, better follow-up questions, and better decisions about what to improve.

Practical implications

For UK universities, the first implication is to break teaching quality into clearer dimensions when collecting feedback. Instead of relying heavily on one broad item, evaluation forms should distinguish between clarity and organisation, fairness of assessment, inclusive classroom climate, and intellectual challenge. That kind of redesign is stronger when students and staff help design teaching evaluation surveys, because the categories then reflect what students actually mean rather than what institutions assume they mean. The benefit is cleaner evidence for enhancement work and fewer debates about what a headline score really captured.

Second, universities should treat free-text comments as essential for interpretation, not as optional colour. The same overall rating can hide very different student expectations: one cohort may want better structure, another may want stronger psychological safety, and another may want more stimulating teaching. This is where Student Voice Analytics fits naturally. Grouping comments into themes such as clarity, fairness, belonging, and challenge gives programme leaders a more defensible basis for action than headline averages alone. The benefit is a more precise starting point for course review, staff development, and survey redesign.

Third, institutions should review how teaching awards, merit systems, and quality processes define excellence. If frameworks reward only one teaching persona, they risk overlooking other forms of value that students clearly recognise. That matters especially where evaluation data feeds into decisions about recognition or performance: behaviour-focused evaluation questions can reduce gender bias in student feedback, but only if the wider evaluation framework is also explicit about what it is trying to capture. Staff also find that evidence easier to act on when institutions build in dialogue around results, as our summary of how student evaluations help teaching improve shows. The benefit is fairer, more balanced use of student evidence in recognition, review, and development processes.

The final lesson is practical rather than philosophical. Universities do not need three separate systems for judging teachers, but they do need to recognise that student evidence is more plural than many dashboards imply. When institutions analyse that plurality properly, feedback becomes easier to interpret and more useful for improvement. If you need to separate those signals in open comments at scale, Student Voice Analytics gives teams a governed route from free text to evidence for course review, staff development, and enhancement planning.

FAQ

Q: How should a university redesign module evaluation questions after reading this paper?

A: Keep a short core, but make the dimensions explicit. Ask separately about clarity of communication, fairness and transparency, inclusive learning climate, and intellectual challenge, then add one open-text prompt asking what most shaped the student's judgment. A structured approach such as the NSS open-text analysis methodology is useful here because it helps teams see which dimension students are actually describing, rather than treating all positive or negative comments as the same signal. Teams should also watch for non-response bias in student evaluations so those clearer dimensions still reflect the full cohort.

Q: What does Q methodology add here, and what are the limits?

A: Q methodology is designed to identify shared viewpoints rather than calculate a population average. In this study, 41 students ranked 43 statements, and factor analysis grouped those rankings into three distinct perspectives. That makes the method strong for showing how students think, but not for claiming that every UK cohort will contain the same proportions of these viewpoints. The Swedish social sciences context also limits direct generalisation, so UK teams should treat the findings as a strong interpretive framework rather than a sector benchmark.
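The extraction step described above can be sketched in a few lines. This is an illustrative simulation only, not the authors' analysis: the random sorts stand in for real Q-sort data, the fixed three-factor count mirrors the paper's result rather than deriving it, and a plain eigendecomposition substitutes for the centroid extraction and rotation used in published Q studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data matching the study's shape: 41 participants
# each rank 43 statements (random scores stand in for real Q-sorts).
n_participants, n_statements = 41, 43
sorts = rng.normal(size=(n_participants, n_statements))

# Q methodology correlates *people*, not items: each cell says how
# similarly two participants ranked the full statement set.
corr = np.corrcoef(sorts)

# Extract shared viewpoints from the person-by-person correlation
# matrix; here a simple eigendecomposition, keeping three factors.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # strongest factors first
top = order[:3]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

# Assign each participant to the factor they load on most strongly,
# i.e. the shared viewpoint their sort most resembles.
viewpoints = np.abs(loadings).argmax(axis=1)
print(corr.shape, loadings.shape, viewpoints.shape)
```

The key design point survives the simplification: the analysis groups whole ranking patterns, so the output is a small number of shared viewpoints with people loading onto them, not an item-by-item average.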

Q: What does this change about student voice practice more broadly?

A: It changes the job of interpretation. Student voice is often treated as if disagreement in comments or ratings is just noise. This paper suggests the opposite: variation can reflect legitimate differences in what students count as good teaching. Universities therefore get better evidence when they analyse comments and scores as signals of different expectations, not just as agreement, noise, or dissatisfaction.

References

[Paper Source]: Lundberg, A. and Stigmar, M. "Rethinking teacher quality in Swedish higher education: insights from social sciences student perspectives and Q methodology". Studies in Higher Education. DOI: 10.1080/03075079.2026.2651935

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
