Lexicon and Software Choice in Education Text Analysis

Updated Mar 11, 2026

If you only look at scores, you miss the reasons behind them. At Student Voice, we believe students' free-text comments give lecturers and university leaders the detail they need to spot underlying issues and identify practical areas for improvement.

That does not mean qualitative data should replace quantitative measures. Used together, the two create a fuller picture of what students are experiencing and what institutions need to act on.

This paper explores how universities can make student feedback easier to interpret. It follows one institution's use of automated text analysis software to examine comments from standardised unit evaluations, with the goal of helping academics understand what students are saying at scale.

The initiative aimed to build a more systematic way to evaluate learning and teaching quality. In practice, that meant producing faster, more thorough interpretations of qualitative feedback from standardised surveys.

Large volumes of student comments are hard to evaluate consistently. In response, the study trialled a text analysis tool on comments from selected units with very large enrolments, using a descriptive approach to surface contextual issues, identify key themes, and test a workable process.

The main benefit of text analysis is speed with structure. It helps academics group comments by theme and understand recurring issues quickly, although automatic classification can still miss nuance and reduce representativeness.
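To make the idea concrete, here is a minimal Python sketch of the general kind of keyword-lexicon tagger such tools build on. The themes and keywords are invented for illustration and are not taken from the paper's lexicon.

```python
# Minimal sketch of lexicon-based theme tagging.
# The themes and keywords below are illustrative, not the study's dictionary.

LEXICON = {
    "assessment": {"exam", "assignment", "marking", "feedback", "grade"},
    "teaching": {"lecture", "lecturer", "tutorial", "explanation", "slides"},
    "resources": {"textbook", "reading", "recording", "materials", "online"},
}

def tag_comment(comment: str) -> set[str]:
    """Return every theme whose keywords appear in the comment."""
    words = set(comment.lower().split())
    return {theme for theme, keywords in LEXICON.items() if words & keywords}

comments = [
    "The lecture slides were clear but the exam felt rushed.",
    "More online recordings of tutorials would help.",
]
for c in comments:
    print(tag_comment(c) or {"uncategorised"}, "-", c)
```

Production tools layer stemming, negation handling, and sentiment on top of this basic matching, which is exactly why the choice of package matters.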

At its core, text analysis looks for patterns in what students say about their university experience. That makes software choice in education text analysis crucial, because different tools handle text in different ways and support different kinds of analysis. As the paper's authors put it:

"The software packages are not easily interchangeable due to the different ways in which they work with text data. The decision on which package to use would depend on the nature of the research being undertaken."

The authors conclude that choosing a tool becomes harder as the data grows more complex and varied. Their key point is useful: start with what you need the analysis to deliver, then assess which tool best fits that job.

The study also supports a wider finding in the literature: automated text analysis can add real value when institutions need to review large survey datasets at scale. For units with high enrolments, it can make student comments far more usable.

Even so, faster analysis does not automatically produce more meaningful reporting. A standardised dictionary can flatten student comments into broad categories, so the lexicon and taxonomy need regular refinement if they are going to stay accurate, detailed, and relevant.
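One practical way to operationalise that refinement is to measure how much of each comment the current dictionary actually explains, and route poorly covered comments to a human reviewer. The keyword set and threshold in this sketch are illustrative placeholders, not values from the study.

```python
# Rough sketch of a lexicon coverage check: flag comments the current
# dictionary explains poorly so analysts can review and extend it.
# ALL_KEYWORDS and REVIEW_THRESHOLD are illustrative placeholders.

ALL_KEYWORDS = {
    "exam", "assignment", "marking", "feedback", "grade",
    "lecture", "tutorial", "slides", "recording", "materials",
}

def coverage(comment: str) -> float:
    """Fraction of a comment's words matched by any lexicon keyword."""
    words = comment.lower().replace(".", "").split()
    return sum(w in ALL_KEYWORDS for w in words) / max(len(words), 1)

REVIEW_THRESHOLD = 0.1  # tune against real comments, not a fixed rule

unmatched = [
    c for c in [
        "Timetabling clashes made attendance impossible.",
        "The lecture slides and recordings were excellent.",
    ]
    if coverage(c) < REVIEW_THRESHOLD
]
print("Candidates for new lexicon entries:", unmatched)
```

Reviewing the flagged comments each cycle is one way a lexicon can grow with the language students actually use, rather than flattening it.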

Overall, the paper suggests that text analysis software can help institutions build a fuller picture of students' learning experiences and their suggestions for improving teaching. The payoff is not just efficiency, but better evidence for course enhancement.

The introduction of text analysis for qualitative data also gives stakeholders a more practical way to support quality assurance and enhancement. Further work is still needed to test how useful these reports are for different stakeholders across the institution.

For us, the strongest takeaway is that education text analysis works best when the software and the lexicon are designed around the questions institutions actually need to answer. That is essential if you want student comments to support quality assurance, enhancement, and action.

FAQ

Q: How does student voice through qualitative feedback specifically contribute to improving teaching practices?

A: Qualitative feedback tells you why students respond as they do. Comments about lecture clarity, course materials, or interactive sessions help lecturers see what is working, what is not, and where targeted changes will have the biggest effect. Text analysis then helps teams spot those themes across large volumes of feedback, so student voice can shape teaching practice and curriculum development at scale.

Q: What are the challenges involved in implementing automated text analysis for student feedback, and how can they be addressed?

A: The main challenge is accuracy. Automated systems can struggle with context, sentiment analysis, and the variation in student language, which can distort the picture if the lexicon or model is too broad. Institutions can reduce that risk by refining dictionaries over time, testing outputs against real comments, and combining automation with human review where nuance matters most.
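As a rough illustration of testing outputs against real comments, the sketch below scores automated theme tags against a small hand-labelled sample. The tags and labels are invented for the example; any real evaluation would use a properly drawn sample.

```python
# Illustrative check: compare automated theme tags with a small
# hand-labelled sample. Tags and labels here are invented.

def precision_recall(auto: list[set[str]], gold: list[set[str]]) -> tuple[float, float]:
    """Micro-averaged precision and recall over per-comment tag sets."""
    tp = sum(len(a & g) for a, g in zip(auto, gold))
    fp = sum(len(a - g) for a, g in zip(auto, gold))
    fn = sum(len(g - a) for a, g in zip(auto, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

auto_tags = [{"teaching"}, {"assessment", "teaching"}]
hand_labels = [{"teaching"}, {"assessment"}]
p, r = precision_recall(auto_tags, hand_labels)
print(f"precision={p:.2f} recall={r:.2f}")
```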

Q: How can universities ensure that the integration of student voice through text analysis remains inclusive and representative of all student groups?

A: Institutions need both inclusive collection and careful analysis. That means encouraging feedback from diverse student groups, offering multiple channels, protecting anonymity, and checking whether the tool or lexicon is overlooking particular voices. Regular reviews of the methodology help ensure that emerging issues, and the language different groups use to describe them, are captured properly.

References

Santhanam, E., Lynch, B. and Jones, J., "Making sense of student feedback using text analysis – adapting and expanding a common lexicon", Quality Assurance in Education.
DOI: 10.1108/QAE-11-2016-0062

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
