Peer assessment in motivating student team-based activities

By David Griffin

Published Nov 11, 2022 · Updated Mar 13, 2026

Virtual team projects can expose students to wider perspectives, but they can also make free-riding harder to spot. A 2021 study of the X-Culture International Consulting Competition suggests that regular peer assessment can increase effort from lower-contributing students before group work starts to fail.

Virtual communication now sits at the centre of education, business, and social life. As digital tools have improved, Global Virtual Teams (GVTs), groups whose members work across countries and time zones, have become more common in higher education. In team-based activities, GVTs give students opportunities to collaborate with peers from different cultural and national backgrounds, which can make group work feel more realistic and more relevant to modern workplaces.

The challenge is accountability. Distance can make collaboration harder, and some students contribute far less time and effort than their teammates. This free-riding problem can be especially acute in GVTs because remote communication can reduce the social pressure to contribute consistently (Taras et al., 2018). For educators, that raises a practical question: can peer assessment give students a stronger reason to stay engaged?

With this in mind, researchers at a Colombian university investigated how peer-assessment scores affected student effort in GVTs (Román-Calderón et al., 2021). They wanted to know whether being graded regularly by teammates during a project would motivate students who were prone to free-riding. The authors used the X-Culture International Consulting Competition to test this idea. This international competition is built into business courses at universities around the world. The rules require the competition score to make up at least 30% of a student's final course grade. Students are placed in virtual teams of six and work through challenges modelled on the modern corporate environment over a ten-week period. That set-up made it possible to observe whether regular peer ratings changed behaviour over time. The competition score was made up of three parts:

  • Peer assessment (PA) scores (20%)
  • Weekly milestone submissions (20%)
  • Final team report (60%)

Each team member rated their teammates weekly on a scale from one (poor) to five (excellent) under the headings of Work Ethic, Helpfulness, Cooperation, and Leadership. Falling below the minimum satisfactory PA score of 2.0 for two consecutive weeks resulted in exclusion from the rest of the competition and a score of zero for that part of the course. In other words, peer judgement carried real consequences, which matters if the aim is to discourage free-riding rather than simply record it.
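The scoring rules above can be sketched in a few lines of code. This is a minimal illustration, not the study's implementation: the component weights (20/20/60) and the two-week exclusion threshold of 2.0 come from the article, while the function names, the assumption that each component is graded on a 0–100 scale, and the strict "below 2.0" reading of the threshold are my own.

```python
# Illustrative sketch of the X-Culture scoring rules as described in the
# article. Names and the 0-100 component scale are assumptions.

def competition_score(pa: float, milestones: float, report: float) -> float:
    """Combine the three components using the published weights:
    peer assessment 20%, weekly milestones 20%, final report 60%."""
    return 0.20 * pa + 0.20 * milestones + 0.60 * report

def is_excluded(weekly_pa: list[float], threshold: float = 2.0) -> bool:
    """A student is excluded after an unsatisfactory PA score
    (below 2.0 on the 1-5 scale) in two consecutive weeks."""
    return any(a < threshold and b < threshold
               for a, b in zip(weekly_pa, weekly_pa[1:]))

print(competition_score(pa=80, milestones=90, report=75))
print(is_excluded([3.4, 1.8, 1.9, 2.6]))  # True: weeks 2 and 3 both below 2.0
print(is_excluded([3.4, 1.8, 2.5, 1.9]))  # False: low weeks not consecutive
```

The consecutive-week condition is the key design choice: a single bad week triggers no penalty, so students get one clear warning and a chance to recover before exclusion.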

The findings were striking. Students who received lower PA scores over time tended to increase their effort in later weeks, suggesting that ongoing peer assessment can motivate students who might otherwise contribute less. Students whose PA scores dropped more sharply also showed more abrupt increases in later effort. Across the full competition, only 1.6% of students were expelled from their team for receiving sufficiently low PA scores in consecutive weeks. The practical takeaway is that regular peer assessment may help shift behaviour before non-participation becomes entrenched.

The authors are careful not to overstate the result. They note that the motivational effect may not last over longer periods, and continuous peer evaluation could also damage trust or disrupt normal social dynamics within a team. Although peer assessment may increase individual effort, it could still weaken collaboration if students become defensive or overly cautious. The study also cannot say much about students who were removed from the competition, or about whether occasional contributions from so-called free-riders may sometimes still be valuable. For that reason, peer assessment looks most useful when the goal is to improve accountability in group work, and when it is paired with support for constructive teamwork rather than used as a blunt penalty.

FAQ

Q: How do students perceive the fairness and effectiveness of peer-assessment (PA) scoring in Global Virtual Teams (GVTs)?

A: Student views are likely to be mixed. Some students see PA scoring as a fair way to make contribution visible and hold team members accountable. Others worry about bias, misunderstanding, or the stress of being judged by classmates every week. That is why institutions should not assume the system feels fair simply because it produces usable scores. Gathering student feedback on the process can reveal whether the criteria are clear, whether moderation is needed, and whether the approach is improving motivation without damaging trust.

Q: What strategies can be employed to enhance collaboration and reduce the negative impacts of continuous peer evaluation on social dynamics within GVTs?

A: Clear criteria are the starting point. Students need a shared understanding of what good contribution looks like, plus guidance on how to give constructive feedback. Regular check-ins, short reflection points, and staff oversight can reduce the risk that peer scoring turns into punishment or conflict. The goal is not only to identify weaker contribution, but to maintain a team environment where students can adjust their behaviour and still work together productively.

Q: What role does text analysis play in identifying and understanding patterns of contribution and interaction within GVTs, especially in relation to free-riding behaviours?

A: Text analysis can help institutions move beyond anecdote. By reviewing open-text comments, peer feedback, discussion posts, or team reflections at scale, educators can spot recurring themes such as unclear expectations, unequal workload, disengagement, or strong peer support. That makes it easier to see where free-riding is becoming a pattern, which teams may need intervention, and whether changes to group assessment are improving collaboration over time.

References:

Román-Calderón, J. P., Robledo-Ardila, C., & Velez-Calle, A. (2021). Global virtual teams in education: Do peer assessments motivate student effort? Studies in Educational Evaluation, 70, 101021.
DOI: 10.1016/j.stueduc.2021.101021

Taras, V., Tullar, W. L., Liu, Y., & Pierce, J. R. (2018). Straight from the horse’s mouth: Justifications and prevention strategies provided by free riders on global virtual teams. Journal of Management and Training for Industries, 5(3), 51–67.
DOI: 10.12792/JMTI.5.3.51
