Are assessment methods holding back literature in English students?
By Student Voice Analytics
Yes. Student comments on assessment methods in the National Student Survey (NSS, the UK-wide survey of teaching and student experience) skew negative overall, with 28.0% positive and 66.2% negative (sentiment index −18.8), so literature in English programmes should prioritise transparent design, consistent marking, and flexibility to sustain engagement. The category summarises how students experience formats and marking across the sector; the CAH coding groups discipline-level feedback across providers. The current discipline extract has no topic breakdown, so we apply these sector patterns with attention to cohort differences: mature (−23.9) and part-time learners (−24.6) tend to report more negative experiences than their peers. This lens shapes the discussion that follows on examinations, coursework, feedback, digital delivery, preparation, and wellbeing.
When it comes to the experiences of literature in English students with different types of assessment, such as exams, coursework, and the feedback they receive, listening to the student voice helps uncover challenges and desired improvements. As students navigate their programmes, assessment design and communication shape learning and the overall academic experience. We explore how traditional and modern methods affect engagement and outcomes, how student perspectives can refine assessment strategies, the role of text analysis in coursework, and what the survey evidence shows. Drawing on both students and staff for a broad perspective, we highlight areas where changes can substantively enhance the learning journey.
How should we navigate examination methods?
For modules relying on exams, methods and criteria need to be explicit and consistent. Exams, whether 6-hour exams, final examinations, or timed essays, pose distinct challenges: preparation demands deep subject knowledge and mastery of exam technique and marking criteria. Publishing a short assessment brief for each task (purpose, marking approach, weighting, allowed resources, common pitfalls) and using checklist-style rubrics improves confidence and fairness. Quick marker calibration using anonymised exemplars at grade boundaries, with recorded moderation notes, reduces variance across markers. Regular conversations about expectations demystify the process and improve students' performance.
How do students balance coursework and assignments?
Students face pressure from word counts and clustered deadlines while synthesising large bodies of literature. Programme teams can coordinate by publishing a single assessment calendar that avoids deadline pile-ups and by balancing methods across modules in line with learning outcomes. Structuring assignments to encourage original thinking and minimise plagiarism, providing annotated exemplars, and offering constructive formative feedback enable students to demonstrate analysis rather than just meet a brief. Plain-language instructions and predictable submission windows particularly help diverse cohorts.
How do guidance and feedback drive growth?
Students use feedback best when criteria and feedforward are concrete. Clear assessment briefs and checklist rubrics anchor expectations; detailed commentary on text analysis shows where work aligns with outcomes and how to improve. Make the feedback loop two-way: invite dialogue so students can query comments and plan next steps. Provide a brief post-assessment debrief to the cohort summarising common strengths and issues even before individual marks return; this quick transparency improves perceived fairness and supports skill development in subsequent assignments.
What are the dilemmas in digital assessment?
Online assessment adds efficiency but can feel impersonal and raise concerns about technical reliability and perceived fairness. Build accessibility in from the start with alternative formats, captioned/oral options, and plain-language instructions. Offer rapid technical support during submission windows and state expectations and academic integrity requirements upfront. A short orientation on assessment formats and referencing conventions, with mini-practice tasks, reduces anxiety for students unfamiliar with UK norms, including non-UK-domiciled learners.
What solves the exam preparation puzzle?
Targeted support lifts outcomes when aligned with assessment design. Revision classes should map directly to exam formats, with exemplars and marking criteria to focus effort. Timely return of exam scripts allows students to analyse performance and adjust study strategies. Early release of assessment briefs and predictable windows help students plan, especially those balancing work or caring responsibilities. Calibrated marking and transparent moderation reinforce trust in outcomes.
How does assessment affect mental health?
Assessment can heighten stress and anxiety, especially when expectations are ambiguous. Transparent briefs, reliable turnaround times, and a balanced mix of methods reduce uncertainty. For mature and part-time learners, predictable submission windows and asynchronous alternatives for oral components improve manageability. Embedding wellbeing signposting in assessment communications and inviting students to discuss feedback supports confidence and reduces pressure without diluting academic standards.
What holistic reforms matter now?
Student insights suggest assessment benefits from programme-level coordination, unambiguous briefs, calibrated marking, and rapid cohort debriefs. With NSS patterns showing a critical tone in assessment methods (28.0% positive, 66.2% negative, sentiment index −18.8) and more negative experiences among mature (−23.9) and part-time (−24.6) learners, reforms that emphasise clarity, parity, and flexibility offer the greatest impact. Literature in English can adopt these sector-tested practices while retaining disciplinary depth in textual analysis and argumentation.
How Student Voice Analytics helps you
Student Voice Analytics surfaces the assessment issues that matter for literature in English by cutting open-text data by discipline (CAH), cohort and demographics, and tracking sentiment over time. You can benchmark assessment methods against the sector, pinpoint where negative tone concentrates (e.g., mode or age), and brief programme teams with concise, anonymised summaries. Like-for-like comparisons by subject mix and cohort profile, plus export-ready tables and dashboards, support boards, TEF and quality reviews, and demonstrate progress year on year.
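To make the cohort cuts described above concrete, here is a minimal sketch in Python using pandas. It assumes a hypothetical comment-level extract with one row per open-text comment and illustrative column names (cah_label, mode, age_group, sentiment); the simple positive-minus-negative index shown is an assumption for demonstration, not the product's actual schema or published metric.

```python
import pandas as pd

# Hypothetical comment-level extract: one row per NSS open-text comment.
# Column names and values are illustrative only.
comments = pd.DataFrame({
    "cah_label": ["Literature in English"] * 6,
    "mode": ["Full-time", "Part-time", "Full-time",
             "Part-time", "Full-time", "Full-time"],
    "age_group": ["Young", "Mature", "Young", "Mature", "Young", "Mature"],
    "sentiment": ["negative", "negative", "positive",
                  "negative", "neutral", "positive"],
})

def sentiment_summary(df: pd.DataFrame, by: str) -> pd.DataFrame:
    """Percentage of positive/negative comments per cohort, plus a simple
    positive-minus-negative index (an assumed metric for illustration)."""
    shares = (
        df.groupby(by)["sentiment"]
          .value_counts(normalize=True)   # proportions within each cohort
          .unstack(fill_value=0.0) * 100  # sentiment labels become columns
    )
    shares["index"] = shares.get("positive", 0.0) - shares.get("negative", 0.0)
    return shares.round(1)

# Cut the same comments by mode of study, then by age group.
print(sentiment_summary(comments, by="mode"))
print(sentiment_summary(comments, by="age_group"))
```

Running the same summary over successive survey years, with a year column added to the groupby, would give the over-time tracking mentioned above.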
Request a walkthrough
Book a Student Voice Analytics demo
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.
- All-comment coverage with HE-tuned taxonomy and sentiment.
- Versioned outputs with TEF-ready governance packs.
- Benchmarks and BI-ready exports for boards and Senate.