By Anosh Butt
Deeley and Bovill (2017) found that staff-student partnerships are uncommon in relation to assessment. Reviewing the literature, the authors emphasise that assessment and feedback are the weakest links in learning and teaching and the most significant contributors to student dissatisfaction (Rust et al., 2005). Students may lack clarity about assessment requirements and marking criteria, making it difficult for them to understand feedback and improve skills applicable to future assignments (Deeley and Bovill, 2017).
Previous research has proposed models of effective communication in feedback, which aim to ensure that students know (a) what the elements of good performance in a particular assessment are, (b) how they have performed in the assessment, and (c) what they need to do to bridge the gap between (a) and (b) (Sadler, 1989). Implementing these models relies on students being conversant in the language of the academic discipline, the language of assessment, and assessment literacy (Deeley and Bovill, 2017).
These models coincided with increased interest in staff and students working as partners in learning and teaching. In a staff-student partnership, there may be concerns about whether students can participate meaningfully as partners in assessment, given that the validatory nature of assessment is traditionally associated with the teacher's domain; however, assessing what students have learned is also an opportunity to design assessment for learning (Cook-Sather et al., 2014). The study explores students' participation with academic staff in assessment processes to co-create different elements of assessment and feedback (Deeley and Bovill, 2017).
To investigate staff-student partnerships, the authors explored students' perspectives on learning during partnerships in which students were engaged as co-designers of assessment and feedback processes (Deeley and Bovill, 2017). The participants were undergraduate MA Social Sciences students at a Scottish university, taking two optional Public Policy honours courses taught in the first and second semesters of 2013/14. The classroom was mixed, comprising third- and fourth-year students as well as international students visiting for one semester. Each course was assessed by a 3000-word essay worth 40% of the overall grade and a two-hour examination worth 60%.
In the first semester, a partnership approach was adopted in the following areas of assessment and feedback: (1) staff-student co-creation of students' essay titles; (2) staff-student co-creation of essay marking criteria; (3) students' formative self-assessment of their essays, using the co-designed marking criteria, which was later compared with the teacher's feedback on the essays; (4) staff-student co-creation of formative and summative examination marking criteria; and (5) student peer review of a formative examination using the agreed, co-designed marking criteria (Deeley and Bovill, 2017).
In the second semester, the partnership consisted of a typed summative examination followed by online feedback on the final examination, using previously agreed marking criteria consistent with the format of the formative examination feedback. Notably, providing summative examination feedback was not usual practice at the university, as marked examination papers had previously not been returned to students (Deeley and Bovill, 2017).
The findings showed that the staff-student partnership was successful. For some students, the approach was unfamiliar and came across as a curious arrangement. The courses were optional, and it was made clear to students that if they did not feel comfortable with the assessment methods, they could opt for a different course (Deeley and Bovill, 2017). The research contributes significantly to the growing recognition that enhancing assessment literacy is essential for all participants in assessment practice.
More frequent staff-student partnerships, in which students are contributing partners in the educational community, would challenge conventional pedagogy and assumptions about learning, building a body of work to ensure that assessment is a powerful process for learning and not merely a test of accumulated knowledge. Deeley and Bovill's (2017) research was a small, specific case study that attempted to engage students actively and meaningfully with their learning through assessment. Prolonged staff-student partnerships would enable and enhance democratic classroom practice. Co-creating marking criteria and then having students formatively self-assess their essays enabled students to compare their assessment with the teacher's, helping them to understand the assessment requirements fully. Deeley and Bovill (2017) believed this was more effective than merely asking students to self-assess their work, because it allowed them to gauge their ability against criteria they were familiar with and to calibrate their self-assessment against their peers' and teacher's assessments.
However, staff-student partnerships may not be appropriate in all learning situations and may lie outside the comfort zone of some students and teachers. The research concludes with several practical recommendations: (1) start small, designing partnership into one assessment or one element of assessment; (2) ensure partnership is voluntary and alternatives are available for students who do not wish to take part; and (3) think proactively about ensuring inclusive approaches for all participating students (Deeley and Bovill, 2017). Staff-student partnerships that enhance students' assessment literacy through democratic practice are innovative because they foster intrinsic motivation, active engagement, and deeper approaches to learning.
[Source] Deeley, S. J., & Bovill, C. (2017). Staff student partnership in assessment: Enhancing assessment literacy through democratic practices. Assessment & Evaluation in Higher Education, 42(3), 463-477.
Rust, C., O'Donovan, B., & Price, M. (2005). A social constructivist assessment process model: How the research literature shows us this could be best practice. Assessment & Evaluation in Higher Education, 30(3), 231-240.