Published May 16, 2022 · Updated Mar 05, 2026
Formative assessment should support learning, but students do not always experience it that way. Weurlander et al. (2012) explore how two different formative assessment methods shape students’ experiences, and how that experience influences the relationship between assessment and learning (Samuelowicz and Bain, 2002).
Assessments can both summarise students’ achievements for an award or certification (summative assessment) and provide feedback that supports learning (formative assessment). Weurlander et al. (2012) highlight that formative assessment is under-theorised, and that more research is needed to understand how different assessment practices enable student learning.
The study argues that assessment should facilitate learning and support students as they develop subject understanding and intellectual abilities. For educators, this means treating assessment as an integral part of teaching and learning, involving students, and using authentic, meaningful tasks drawn from a range of assessment forms (Falchikov, 2005), such as peer review feedback in higher education.
Formative assessment can serve different aims, but at its core it generates feedback on students’ performance to improve learning. Weurlander et al. (2012) contribute to this area by exploring students’ experiences of two formative assessment types within a course. The study focuses on: (1) how formative assessment methods can act as tools for learning, and (2) how students experience and perceive these two approaches (Weurlander et al., 2012).
To address these questions, Weurlander et al. (2012) compared two formative assessment types: an individual written assessment with factual questions, and an oral assessment that encouraged students to solve problems in groups. These were introduced as part of a nine-week, lecture-based undergraduate pathology course that also included autopsies, case seminars, and sessions where students discussed microscopic tissue images.
The first formative assessment included around 20 short-answer questions, usually requiring responses of a few words to a few sentences, and mainly emphasised recall of factual knowledge. Examples included: What are the causes of tissue damage due to an inflammatory response? Which factors influence the selection of the target organ during the spread of a tumour from its original site (metastasis)?
For the second formative assessment, students received cards with different pieces of case information, including written patient histories, laboratory tests, printed microscopic images, and surgical specimens. The two approaches were chosen because of their different emphases: the written assessment focussed on right-or-wrong answers, individual performance, and delayed feedback, while the group assessment focussed on understanding and problem-solving, group performance, and immediate feedback (Weurlander et al., 2012).
The individual assessment largely reflects assessment as knowledge control, while the group assessment reflects assessment as learning. Alongside these formative assessments, the course ended with two summative assessments: a group problem-solving assessment and an individual written exam.
This contrast helps educators match assessment design to the type of learning they want to encourage.
The study suggests formative assessment can act as a tool for learning by influencing motivation, helping students become more aware of their learning, and supporting the overall learning process. This was a small-scale study that focussed on students’ experiences of assessment rather than outcomes, but Weurlander et al. (2012) suggest the findings have implications for assessment practice and course design.
Students’ experiences were shaped by the order in which they encountered the assessment methods and by the educational environment that formed the study’s context. For instance, the individual written assessment was less likely to be seen as a successful tool for learning if it was presented later in the undergraduate course. Weurlander et al. (2012) argue that students may view this kind of assessment as an appropriate learning tool when the environment strongly emphasises understanding, problem-solving, and self-regulated learning.
The group assessment, with its emphasis on application and collaborative problem-solving, appeared more transferable and could be used across a variety of educational settings (Weurlander et al., 2012).
From a teaching perspective, using complementary formative assessments throughout a course can help students study more consistently. However, even when students can manage each assessment task on its own, the set of tasks as a whole can feel daunting and demanding, and they may become selective about which tasks to focus on (Lindberg-Sand and Olsson, 2008; Scheja, 2002).
Lastly, assessment task design improves when teachers consciously sequence tasks to facilitate learning in different ways. By combining different assessment tasks and considering the educational and disciplinary context, programmes can support more effective, efficient, and robust assessment practice. Takeaway: combine complementary assessments, plan the sequence, and keep the overall workload manageable so students can engage with all tasks.
Q: How do students’ personal and cultural backgrounds influence their perception of, and the effectiveness of, different formative assessment types?
A: Students’ personal and cultural backgrounds can shape how they perceive, participate in, and benefit from formative assessments. Attending to student voice in higher education is important here because it helps educators recognise and incorporate diverse perspectives into assessment design and delivery.
Background factors can influence confidence, communication styles, and preferences for individual versus group work. For example, students from cultures that value collective success may find group assessments more meaningful and engaging, while those from cultures that emphasise individual achievement may prefer individual assessments. Recognising these differences can help educators design more inclusive assessments that support learning for a wider range of students.
Q: In what ways can technology be leveraged to enhance the feedback process in formative assessments to better support student learning and engagement?
A: Technology can enhance feedback in formative assessments by making it more immediate, interactive, and personalised. Digital platforms can help educators provide timely, detailed feedback that students can access easily, and tools such as text analysis software for education can highlight patterns in written work and suggest areas to improve.
Online forums and discussion boards can also support continuous peer-to-peer and teacher-to-student feedback, creating an ongoing dialogue around learning. Used well, these tools can strengthen student voice by giving students more ways to engage with feedback, ask questions, and reflect on their progress.
Q: How do different disciplines (e.g., humanities vs. STEM) impact the design and perceived value of formative assessments among students and educators?
A: The design and perceived value of formative assessments vary across disciplines, influenced by the subject matter and learning objectives. In the humanities, formative assessments may lean towards essays, presentations, and discussions that support interpretation, critical thinking, and student voice. In contrast, STEM disciplines may favour tasks that emphasise problem-solving and application, such as lab work, quizzes, and group projects.
Designing assessments that fit the learning objectives of each discipline can improve engagement and learning. Involving students in shaping assessment, regardless of discipline, also helps ensure tasks feel relevant, appropriately challenging, and supportive of learning outcomes.
[Source]
Weurlander, M., Söderberg, M., Scheja, M., Hult, H., & Wernerson, A. (2012). Exploring formative assessment as a tool for learning: students’ experiences of different methods of formative assessment. Assessment & Evaluation in Higher Education, 37(6), 747–760.
DOI: 10.1080/02602938.2011.572153
[1] Falchikov, N. (2005). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. New York, NY: Routledge.
ISBN: 9780415308212
[2] Lindberg-Sand, Å., & Olsson, T. (2008). Sustainable assessment? Critical features of the assessment process in a modularised engineering programme. International Journal of Educational Research, 47(3), 165–174.
DOI: 10.1016/j.ijer.2008.01.004
[3] Samuelowicz, K., & Bain, J. D. (2002). Identifying academics’ orientation to assessment practice. Higher Education, 43, 173–201.
DOI: 10.1023/A:1013796916022
[4] Scheja, M. (2002). Contextualising studies in higher education: First-year experiences of studying and learning in engineering [Doctoral dissertation, Stockholm University].
© Student Voice Systems Limited, All rights reserved.