
Module Evaluation, Likability and The Case For Free-Text Comments

By Student Voice

This study provides preliminary evidence that the relationship between numerical student evaluations and instructor likability is causal: student evaluations may be indicators of a student-perceived construct akin to likability. This lends further support to the idea that, when trying to capture the student voice to improve teaching and the student experience, free-text comments are far more valuable than numerical scale answers.

The study examined the influence of likability on the student evaluation of teaching. It found that, with no information about the instructor or how a class was taught, students' perceptions of likability accounted for two-thirds of the total variance in evaluations.

Module evaluations are designed on the assumption that students rate teaching on its effectiveness. It has been found, however, that students frequently ignore the question actually asked and answer in a manner consistent with some overriding issue or concern:

  • Many student module evaluation instruments are not measuring the effectiveness of instruction.
  • Students do not evaluate teaching in a multidimensional fashion.
  • The impression someone has of an instructor, before any instruction or syllabus is presented, is strongly related to how they rate that instructor after a year's worth of interaction.

A key point is that the use of module evaluation does not appear to improve teaching, as measured by the same module evaluation. The goals of evaluation are not always clear to the students completing it, which is a problem because it means responses may be biased, affecting the results of the evaluation.

The statistical analyses show no relationship between the survey instruments and measurable learning.

The results of this study show that the likability of the professor is a strong predictor of self-reported learning and grades. There are many possible explanations for the correlation between instructor evaluations and student perceptions of likability, and the research has not been able to show with any certainty what drives this relationship.

Researchers found that students' perceptions of their instructor's likability and personality, and their responses on a module evaluation instrument, are highly related. This has led some researchers to suggest that the evaluations in effect create a likability scale.

This study has demonstrated that the likability of an instructor is not a simple construct. It is composed of many components that relate to one another in complex ways, and not all of them correlate directly with an overall judgement of likability.

Understanding the student evaluation of teaching as a measure of what students like and dislike is a paradigm shift, and one that brings to mind customer satisfaction surveys.

Module evaluation instruments measure what students like and dislike, not necessarily the intended teaching and its effectiveness. The only way to gather more nuanced responses and to interrogate the student voice for actionable insight is to better promote and analyse free-text comments.

FAQ

Q: How can educational institutions effectively analyse free-text comments to extract actionable insights?

A: Educational institutions can effectively analyse free-text comments by using advanced text analysis software that employs natural language processing (NLP) techniques. This technology can identify themes, sentiments, and patterns within the feedback, making it easier to understand the student voice on a larger scale. By categorising comments into different areas, such as course content, teaching methods, or instructor behaviour, institutions can pinpoint specific areas for improvement. Additionally, involving educators in the review process can help ensure that the analysis aligns with educational goals and teaching contexts. Regular training on interpreting student voice through text analysis can also enhance the effectiveness of this approach.
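As a rough illustration of the kind of pipeline described above, the minimal sketch below uses scikit-learn to surface recurring themes from a handful of invented comments and applies a crude word-list sentiment score. The example comments, keyword lists and parameter choices are assumptions for illustration only; a real deployment would use a purpose-built text analysis platform or a trained sentiment model.

```python
# Minimal sketch: surfacing themes and rough sentiment from free-text module
# feedback. Assumes scikit-learn is installed; the comments and word lists
# below are illustrative placeholders, not any institution's real instrument.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

comments = [
    "The lecturer explained concepts clearly but the workload was too heavy",
    "Seminars were engaging; more worked examples would help before the exam",
    "Feedback on assignments arrived late and was hard to act on",
    "Great module, the online materials and recordings were well organised",
]

# 1) Theme discovery: TF-IDF plus non-negative matrix factorisation yields a
#    few interpretable "topics" (clusters of co-occurring words).
vectoriser = TfidfVectorizer(stop_words="english")
tfidf = vectoriser.fit_transform(comments)
nmf = NMF(n_components=2, random_state=0)
nmf.fit(tfidf)
terms = vectoriser.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"Theme {i}: {', '.join(top)}")

# 2) Very rough sentiment: a placeholder word-list score, standing in for a
#    proper sentiment model or service.
POSITIVE = {"clearly", "engaging", "great", "organised", "help"}
NEGATIVE = {"heavy", "late", "hard"}

def crude_sentiment(text: str) -> int:
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for c in comments:
    print(crude_sentiment(c), c)
```

Even a toy pipeline like this makes the point that themes and sentiment can be pulled out of free text at scale, while the interpretation of what to change still sits with educators.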

Q: What are the specific challenges associated with interpreting free-text feedback from students, and how can these be addressed?

A: Interpreting free-text feedback from students can be challenging due to the presence of ambiguous language, varying levels of detail, and subjective interpretations. To address these challenges, educational institutions can use a combination of automated text analysis tools and human interpretation. Automated tools can help in managing the volume of data and in identifying broad themes and sentiments. However, human insight is crucial for understanding context, nuances, and the subtleties of student voice. Training staff to recognise and mitigate their biases and to interpret comments within the context of the course and broader educational objectives can also improve the accuracy and utility of the feedback analysis.

Q: How can instructors use the insights gained from free-text comments to improve their teaching methods and student learning outcomes?

A: Instructors can use insights gained from free-text comments to tailor their teaching methods more closely to the needs and preferences of their students, thereby enhancing student learning outcomes. This involves first identifying common themes and concerns in the feedback, such as areas where students feel confused or particularly engaged. Instructors can then adjust their course content, teaching style, or interaction with students based on these insights. For instance, if many students comment on the difficulty of understanding a particular concept, the instructor might introduce more examples, interactive elements, or review sessions focusing on that concept. Engaging with student voice in this way not only helps in improving the teaching approach but also demonstrates to students that their feedback is valued and acted upon, potentially increasing their engagement and investment in the course.

References

[Paper Source]: Dennis Clayson (2022) The student evaluation of teaching and likability: what the evaluations actually measure, Assessment & Evaluation in Higher Education, 47:2, 313-326,
DOI: 10.1080/02602938.2021.1909702
