Student Voice

Student Feedback on Flipped Teaching

By Daniel Johnston

In August 2020, Lucas Gren published research in the IEEE journal Transactions on Education building on the concept of the “flipped classroom”. In this publication, Gren cites an earlier review of the field, which defines the concept as “an educational technique that consists of two parts:

  1. the interactive group learning activities inside the classroom and
  2. direct computer-based individual instruction outside the classroom”.

As the name implies, this model of learning and teaching turns the traditional tropes of education on their heads.

Gren’s interest in this approach was piqued when, as a new PhD student in 2013, he took on various teaching responsibilities at Chalmers University of Technology. After a significant drop in the student evaluation scores for one particular class, Gren concluded that this was due to inadequate teaching techniques (rather than his being new to the role). Given the shortage of studies into the effectiveness of the flipped classroom approach, Gren committed to adding to the existing understanding of the concept.

Before implementing “the flip”, Gren’s class used traditional lecturing (for up to 50 learners). For clarity, by “traditional lecturing” we mean that the lecturer narrated PowerPoint-based presentations and occasionally answered questions via the blackboard, with learners responsible for taking notes. This group, under these conditions (i.e. the 2014 class cohort), is what the publication uses as a control group. Note that a “blended learning” model was used in both the control case and the flipped case: students had access to the source textbooks, lecture PowerPoints, and related research articles via an online learning environment.

Post-flip (the 2nd, 3rd, and 4th years of the experiment), the mode of delivery was drastically different. In a self-directed stage before each lecture, the teacher provided a 10–20 minute video lecture (either made exclusively for the class or found online), and the students then answered a multiple-choice quiz based on its content. Once the students had familiarised themselves with these materials, the lectures consisted of teachers leading an active examination of them. For consistency, these new lectures took place in the same time slots as in the previous (un-flipped) case. Concerning in-class activities, the typical format included, but was not limited to, the following (paraphrased from Gren’s publication).

  • An introduction in which students have a few minutes to discuss (in pairs) the online resources for that lesson.
  • A class-wide discussion of the online resources, often incorporating a short, open-ended question about what could be included, excluded, or clarified.
  • Miscellaneous administration relating to lab work and reports.
  • Group discussions of the video lecture content, in which the in-class teacher(s) offered students alternative perspectives.
  • A statement or problem, shared by the teacher(s), that challenged a particular point of understanding in the subject, with students engaging in a process coined “think-pair-share”.
  • One or more worked examples, with students asked to engage by digital means (i.e. “voting” for the correct answer or approach).
  • Group work in which students were asked to design appropriate methods of investigating a given phenomenon.

Note that this work is not alone in the conclusions it draws: an increasing number of experiments (with varying approaches) have reached similar, supporting observations since 2013. The consensus is that the flipped classroom model is a marked improvement; the only significant variation in these conclusions is in the degree to which the new approach improves student satisfaction and performance.

Gren evaluated the results of his experiment by two measures: exam performance and student feedback. Considering the former first, the exams contained a wide variety of question formats (as had also been the case in previous years), ranging from multiple-choice to open-ended essay-style questions. While the experiment saw no measurable improvement in exam results after the 1st year of the flipped approach, the improvement in later years was significant.

Turning to the student satisfaction measure, Gren notes significant praise for the flipped approach in 2016 (the same year as the significant improvement in exam results). While the 2015–17 results show a student preference for the flipped approach, this is a considerably less reliable measure due to the low response rates to the feedback questionnaires: 50% in 2014, 45% in 2015, 26% in 2016, and 36% in 2017. While Gren, justifiably, sees the results of this survey as positive, he also draws attention to the need for consistent and rigorous training for teaching staff adapting to this new model. Furthermore, he notes that the teaching staff involved in the class were not constant over the years considered. For these reasons (and other statistical concerns), Gren determines that there was no “clear effect on students’ perception” of the class.

While there are some inconsistencies in this particular methodology (changing teaching staff and students, a small control group, etc.), the improvement in attainment is a clear motivation for dedicating more attention to this new model of teaching. As Gren highlights, however, the drive for progress via such endeavours cannot rely solely on existing teaching staff: they have to be supported. And this support has to go beyond resources; it has to include rigorous training in the delivery of such promising approaches. If this is realised, the impact of lessons could extend far beyond the classroom.


[Source Paper] L. Gren, "A Flipped Classroom Approach to Teaching Empirical Software Engineering," in IEEE Transactions on Education, vol. 63, no. 3, pp. 155-163, Aug. 2020.
DOI: 10.1109/TE.2019.2960264
