Students judge AI-using teachers by care, not just technical competence

Updated Apr 10, 2026

Students may accept teachers using Generative AI, but they do not judge that use on novelty alone. They judge it by whether the teacher still seems present, responsible, and genuinely invested in their learning. That is why Bodong Yang's recent Studies in Higher Education paper, "'If GAI has finished her job, why would I need her?': a mixed-methods study on students' perceptions of GAI-using teachers", matters. For universities collecting module evaluation comments, AI pilot feedback, and wider student experience data, the paper surfaces a practical question: when teachers use Generative AI, do students still feel taught, supported, and cared for by a human expert?

Context and research question

Much of the current AI debate in higher education focuses on students using these tools, and on wider equity challenges and opportunities in AI and education. That matters, but it is only half the picture. Universities are also starting to use Generative AI in teaching preparation, feedback, and course delivery. Once that happens, student voice work needs to ask a different question: how do students interpret staff use of AI, and what does that do to trust?

Yang addresses that question through a sequential explanatory mixed-methods design. In the quantitative phase, 422 university students were randomly assigned to evaluate one of three teacher types: a teacher who fully used GAI, a teacher who collaborated with GAI, and a teacher who did not use GAI. The study then examined perceived teacher care. A qualitative follow-up with 12 participants added the nuance behind those judgments. For UK higher education teams, that makes the paper especially useful. It moves beyond abstract opinion polling and tests how students respond to distinct patterns of AI use in teaching.

Key findings

The headline finding is clear: students saw teachers who fully used GAI, and teachers who collaborated with GAI, as less caring than teachers who did not use GAI. That matters because perceived care is not a soft extra. It shapes whether students experience teaching as responsive, accountable, and worthy of trust. That sits neatly beside recent evidence that students use Generative AI for feedback, but trust teachers more when judgement and stakes increase.

The abstract states the core result plainly:

"students perceived teachers who fully utilize GAI and those who collaborate with GAI as significantly less caring"

That finding should make universities pause before treating AI adoption as a neutral efficiency gain. A tool may save staff time, but if it weakens students' sense that a teacher is present, attentive, and professionally invested, the institution may be exchanging labour savings for relational cost.

The qualitative phase adds a more useful nuance than a simple pro-AI or anti-AI reading. Students judged teachers who fully relied on GAI negatively because that level of use appeared to contradict what they believe a university teacher is for. In students' eyes, excessive AI use risked hollowing out the human responsibilities of teaching: judgement, interpretation, care, and intellectual leadership. In other words, the problem was not only the technology. It was the perception that the teacher had stepped back too far.

Students were more open to teachers who used GAI critically and visibly, rather than automatically. The paper suggests that collaboration with AI can still be read more positively when teachers remain clearly responsible for the teaching, show openness to new tools, and use GAI with discernment rather than deference. That is an important distinction for institutions designing AI guidance. Students are not demanding a return to pre-AI teaching. They are asking to see that the teacher is still doing the irreplaceable human work.

The study also shows that complete non-use is not a perfect answer. Students respected teachers who did not use GAI, but some also saw them as somewhat stubborn in the GAI age. That matters because it warns against treating blanket refusal as automatically reassuring. Students appear to want something more balanced: human judgement first, but not wilful technological avoidance.

Taken together, the findings suggest that students assess teachers' AI use through a relational lens. They are not asking only whether GAI is efficient, innovative, or technically competent. They are asking whether its use changes teacher care, authenticity, flexibility, and responsibility. For universities trying to interpret AI-related student comments, that is the practical takeaway.

Practical implications

For UK universities, the first implication is to ask more precise questions about staff AI use. A generic item such as "AI is used appropriately on my course" is too blunt to guide action. Student Experience teams need to know whether students think AI is helping with clarity, accessibility, and responsiveness, or whether it is making teaching feel automated, distant, or less accountable.

Second, institutions should distinguish between types of teacher AI use when collecting student feedback. Students may respond differently to AI used for administrative drafting, formative feedback support, lecture preparation, or direct teaching interaction. If those uses are collapsed into a single category, the data becomes much less useful, especially if teams want student voice in assessment and feedback to produce usable evidence for course design. This is where structured open-text analysis helps: comments can be separated into themes such as care, responsiveness, expertise, authenticity, fairness, and trust.
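As a concrete illustration of that kind of structured open-text analysis, the sketch below tags student comments with relational themes via simple keyword matching. It is a minimal, hypothetical example: the theme keywords are assumptions for illustration, not a validated higher-education taxonomy, and a production pipeline would use far richer text analysis than word matching.

```python
# Minimal sketch: tagging open-text comments with relational themes.
# Theme keyword sets below are illustrative assumptions, not a real taxonomy.

THEMES = {
    "care": {"caring", "cares", "supportive", "support", "invested"},
    "responsiveness": {"responsive", "replied", "feedback", "quick"},
    "authenticity": {"genuine", "authentic", "personal", "human"},
    "trust": {"trust", "reliable", "honest", "transparent"},
}

def tag_comment(comment: str) -> list[str]:
    """Return the sorted list of themes whose keywords appear in the comment."""
    words = set(comment.lower().replace(",", " ").replace(".", " ").split())
    return sorted(theme for theme, keywords in THEMES.items() if words & keywords)

comments = [
    "The AI feedback was quick, but it didn't feel personal or caring.",
    "I trust my tutor's judgement more than a generated reply.",
]
for c in comments:
    print(c, "->", tag_comment(c))
```

Even this crude version shows why separating themes matters: the first comment above registers as positive on responsiveness but negative on care and authenticity, which a single "AI is used appropriately" item would flatten into one number.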

Third, universities should not treat concern about staff AI use as simple resistance to innovation. This paper suggests that negative reactions often express a deeper expectation about what a teacher owes students. When students describe AI-heavy teaching as uncaring, they are making a statement about responsibility, not merely about technology preference. Student Voice Analytics is relevant here because it can help institutions categorise those signals across large volumes of module comments, AI pilot feedback, and pulse surveys, rather than relying on anecdote or a few loud responses.

Finally, the policy lesson is to make human oversight visible. If staff use GAI, students need to understand where that use begins, where it stops, and how academic judgement remains human. The more clearly institutions communicate that boundary, the less likely students are to interpret AI use as professional withdrawal.

FAQ

Q: How should universities ask students about teachers' use of Generative AI in a way that produces actionable evidence?

A: Break the issue into specific prompts. Ask separately about trust, care, clarity, fairness, and whether AI use feels appropriately supervised by the teacher. Then add one open-text question such as "When staff use AI in teaching or feedback, what feels helpful and what feels concerning?" That makes it easier to separate objections to poor implementation from broader anxieties about teacher presence or judgement.
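One way to operationalise that advice is a small item bank: one closed item per construct, plus the single open-text prompt. The sketch below is a hypothetical structure and the item wording is illustrative only, not validated survey language.

```python
# Hypothetical item bank: one closed item per construct, plus one open-text
# question. Wording is illustrative, not validated survey language.

survey_items = [
    {"construct": "trust", "item": "I trust teaching on this course, including where AI is used."},
    {"construct": "care", "item": "Staff use of AI does not make teaching feel less caring."},
    {"construct": "clarity", "item": "AI-supported materials and feedback are clear."},
    {"construct": "fairness", "item": "AI is used fairly and consistently on my course."},
    {"construct": "oversight", "item": "Staff use of AI feels appropriately supervised by the teacher."},
]

open_text_prompt = (
    "When staff use AI in teaching or feedback, "
    "what feels helpful and what feels concerning?"
)

# Each construct can then be reported separately, rather than collapsed into
# a single blunt "AI is used appropriately" item.
constructs = [item["construct"] for item in survey_items]
print(constructs)
```

Keeping the construct label alongside each item means the closed-response scores and the themed open-text comments can be joined on the same categories at reporting time.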

Q: What should universities keep in mind before generalising from this study?

A: This is one mixed-methods study, with 422 students in the quantitative phase and 12 participants in the qualitative follow-up. It is most useful as a strong directional signal, not a universal rule. The practical response is to test the same questions in local contexts, especially in courses already piloting AI-supported teaching or feedback, and then compare the findings with actual student comments.

Q: What does this change about student voice work in higher education?

A: It widens the frame. Student voice on AI should not focus only on whether students use the technology themselves. Universities also need to understand how students interpret staff use of AI, because that affects trust in teaching, feedback, academic relationships, and what students count as teaching excellence. Open comments are especially valuable here, because they capture whether students see AI use as thoughtful support, professional laziness, or something in between.

References

[Paper Source]: Yang, Bodong. "'If GAI has finished her job, why would I need her?': a mixed-methods study on students' perceptions of GAI-using teachers". Studies in Higher Education. DOI: 10.1080/03075079.2026.2648696

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
