Lecturer rapport matters more than GenAI use for student learning

Updated Apr 24, 2026

Universities are under pressure to show they are using Generative AI thoughtfully in teaching. At Student Voice AI, we see the same tension in AI-related comments: institutions may want to show innovation, but students still judge whether teaching feels clear, credible, and human. That is why Maja Šerić's paper in Innovations in Education and Teaching International, "Does GenAI truly support student learning? Examining the impact of lecturers’ pedagogical vs. technological skills", published online on 23 April 2026, matters. For teams following recent evidence on student experiences of GenAI in UK universities, it adds a useful warning: visible AI adoption does not automatically translate into better student learning.

Context and research question

As more universities pilot AI-supported teaching, staff guidance often concentrates on capability, efficiency, and acceptable use. Those questions matter, but they can obscure a more practical issue for UK Student Experience teams and PVCs: what actually drives students' sense that teaching is helping them learn? If students are judging AI-supported teaching through the same lens they use for feedback, support, and teacher presence, institutions need evidence that separates tool use from teaching quality.

Šerić addresses that question through an empirical study with students at a Spanish public university, using Partial Least Squares Structural Equation Modelling (PLS-SEM) to examine how lecturer clarity, lecturer expertise, lecturer-student rapport, and lecturer use of GenAI for teaching and learning relate to student learning. The setting is not UK-based, but the research question transfers readily. UK institutions are asking similar questions in module evaluations, AI pilots, and wider survey work: does visible AI use improve the student experience, or does it only help when the underlying pedagogy is already strong?
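
To make the shape of that analysis concrete, here is a minimal sketch of the kind of relationship the model tests: four lecturer-related factors measured separately and compared against a reported-learning outcome. It uses a plain ordinary least squares regression as a stand-in, not the paper's actual PLS-SEM specification with latent constructs, and the column names and data are invented for illustration.

```python
# Illustrative sketch only: a simple OLS stand-in for the kind of
# relationship tested in the paper (the study itself used PLS-SEM
# with latent constructs, which this does not reproduce).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: each row is one student's ratings (1-7 scale).
df = pd.DataFrame({
    "clarity":          [6, 5, 7, 4, 6, 5, 7, 3],
    "expertise":        [6, 6, 7, 5, 6, 5, 6, 4],
    "rapport":          [7, 5, 6, 3, 6, 4, 7, 2],
    "genai_use":        [5, 6, 3, 6, 4, 5, 2, 6],
    "student_learning": [6, 5, 7, 4, 6, 4, 7, 3],
})

# Regress reported learning on the four lecturer factors together, so
# each coefficient reflects one factor's contribution while holding the
# others constant.
model = smf.ols(
    "student_learning ~ clarity + expertise + rapport + genai_use",
    data=df,
).fit()

print(model.params)  # relative size of each factor's association
```

In the paper, the analogous PLS-SEM path coefficients showed rapport as the strongest influence and GenAI use as non-significant; the point of the sketch is only that the factors have to be measured separately before they can be compared at all.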

Key findings

Pedagogical skill, not GenAI use, was the stronger driver of student learning. According to the abstract, lecturer clarity, expertise, and rapport all sit on the pedagogical side of the model, and these factors, not technological experimentation in itself, explain the meaningful variation in learning. That matters for UK universities because AI can easily be treated as a marker of innovation even when students are still judging the basics: whether teaching is understandable, credible, and worth trusting.

Lecturer-student rapport stood out as the strongest factor in the model. That is the most practically useful result in the paper, because rapport is often easy to talk about but harder to measure well. Yet it shows up repeatedly in student comments about teaching quality, support, and care.

"lecturer-student rapport emerges as the most influential factor."

For institutions, that is a reminder that students do not experience teaching as a technical delivery system. They experience it as a relationship shaped by responsiveness, presence, tone, and whether staff seem invested in their progress.

The paper also suggests that lecturer GenAI use is not a shortcut to better learning. The abstract states that lecturer use of GenAI for teaching and learning did not significantly enhance student learning. That does not mean GenAI has no place in teaching. It means the value is conditional. If AI use weakens clarity, makes teaching feel generic, or creates distance between lecturer and student, the pedagogical gain may be negligible. That fits a wider pattern in recent research showing that students use Generative AI for feedback, but trust teachers more when judgement and academic stakes rise.

The broader implication is that students appear to reward human teaching qualities before technological fluency. Expertise still matters. Clarity still matters. Rapport matters most. For UK higher education teams, that is a more actionable frame than a simple pro-AI or anti-AI reading. It suggests the right question is not "Are staff using GenAI?" but "Does any use of GenAI preserve the relational and pedagogical qualities students associate with good teaching?"

Practical implications

For UK universities, the first implication is to stop treating lecturer AI use as a proxy for teaching quality. If institutions want meaningful evidence from module evaluations or AI pilots, they should ask separately about clarity, expertise, rapport, and the perceived role of AI in teaching. That makes it far easier to see whether students are responding to improved pedagogy, technological novelty, or a loss of teacher presence, which gives teams a clearer basis for action.

Second, institutions should collect open-text feedback that lets students explain what AI-supported teaching feels like in practice. A scaled item can show whether students approve or disapprove. It cannot show whether they think AI use saves time but weakens responsiveness, or helps with structure but reduces trust. This is where Student Voice Analytics fits naturally: it helps universities separate recurring themes around rapport, clarity, expertise, and AI use across large comment sets, using a workflow closer to our NSS open-text analysis methodology than to ad hoc reading of a few comments. The benefit is better diagnosis before institutions rewrite policy or staff guidance.
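
As a toy illustration of what separating those themes means in practice (not Student Voice Analytics' actual pipeline), the sketch below tags open-text comments against hypothetical keyword lists for rapport, clarity, expertise, and AI use, so counts and example comments can be reviewed per theme. The keyword lists and comments are invented.

```python
# Toy illustration of theme tagging on open-text comments.
# Keyword lists and comments are invented; a production pipeline would
# use a trained, HE-specific taxonomy rather than keyword matching.
import re
from collections import defaultdict

THEMES = {
    "rapport":   ["approachable", "cares", "responsive", "listens"],
    "clarity":   ["clear", "confusing", "well explained", "structure"],
    "expertise": ["knowledgeable", "expert", "credible"],
    "ai_use":    ["ai", "chatgpt", "generative", "genai"],
}

comments = [
    "Lectures were clear and the tutor was genuinely approachable.",
    "Slides felt AI generated and the structure was confusing.",
    "Very knowledgeable lecturer who listens to feedback.",
]

tagged = defaultdict(list)
for comment in comments:
    for theme, keywords in THEMES.items():
        # Whole-word match so short keywords like "ai" do not hit
        # substrings of unrelated words.
        if any(re.search(rf"\b{re.escape(kw)}\b", comment, re.IGNORECASE)
               for kw in keywords):
            tagged[theme].append(comment)

for theme, matches in tagged.items():
    print(f"{theme}: {len(matches)} comment(s)")
```

At scale, the value comes from reading the matched comments per theme, which is what lets a team see whether negative AI comments are really about rapport, clarity, or the tool itself.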

Third, universities should make the human boundary around AI use more visible. If staff are using GenAI to prepare materials, generate examples, or support low-stakes teaching tasks, students need to know where that support begins and where academic judgement remains firmly human. Pair that communication with a clear review process for AI-related comments and a documented method such as a student comment analysis governance checklist. That reduces avoidable distrust and gives UK teams more defensible evidence when AI use becomes contentious.

FAQ

Q: How should a university apply this paper when reviewing AI use in teaching?

A: Start by reviewing existing module evaluation or AI pilot questions. If they only ask whether AI was useful, they are too blunt. Add separate prompts on lecturer clarity, expertise, rapport, and whether AI use felt appropriate or relied on too heavily. Then include one open-text question so students can explain what made the teaching feel more or less supportive.

Q: What should readers keep in mind about the methodology?

A: This is a single-institution study from a Spanish public university, analysed with PLS-SEM. That makes it useful for testing relationships between teaching factors and reported learning, but it is not a sector-wide benchmark for the UK. The publicly available abstract also gives less detail than a full methods section would, so UK teams should use the findings as a strong directional signal and test the same questions in their own evaluation data.

Q: What does this change about student voice work more broadly?

A: It reinforces that student voice on AI should not be reduced to adoption or enthusiasm. Universities need to understand how students interpret the human qualities of teaching when AI becomes more visible. In practice, that means comments about rapport, care, clarity, and trust may be just as important as comments about the tool itself.

References

[Paper Source]: Šerić, M. (2026). "Does GenAI truly support student learning? Examining the impact of lecturers’ pedagogical vs. technological skills". Innovations in Education and Teaching International. DOI: 10.1080/14703297.2026.2662590

