Published Jun 16, 2024 · Updated Mar 08, 2026
marking criteria · English studies (non-specific)

When students cannot see how interpretation turns into marks, confidence in the process drops quickly. In English Studies, where originality and judgement matter, vague rubrics can make marking feel personal rather than fair.
Across the UK, National Student Survey (NSS) open-text comments grouped within our undergraduate student comment themes and categories show that the marking criteria theme is heavily negative (87.9% negative; sentiment index -44.6). For English studies (non-specific), the current discipline extract does not include topic rows, so sector-level evidence is the clearest guide to where English Studies teams should act first.
With 72.7% of comments coming from younger students and 75.8% from full-time students, the pressure points align with typical English Studies cohorts. In a related subject area, Law records a sentiment index of -47.2, which reinforces the same risk: interpretive marking drifts when criteria, exemplars and calibration are weak.
The practical response is straightforward: make criteria visible, show what quality looks like, and calibrate markers so students can see how judgements are reached.
How should English Studies frame assessment from the outset?
Students need to know, before they start drafting, what strong work looks like. In English Studies, assessment criteria should be precise enough to cover literature, linguistics and creative writing without flattening the distinct demands of each task.
That means balancing creativity with textual analysis and rewarding argument quality alongside originality. Sector feedback suggests students expect criteria with the brief, clear weightings and annotated exemplars at key grade bands, a pattern echoed in the feedback issues English Studies students raise most often. Short marker calibration sessions on shared samples, followed by published "what we agreed" notes, help staff apply those criteria consistently. When grades are returned, a brief "how your work was judged" summary tied to rubric lines closes the loop and reduces avoidable challenge.
How does a diverse curriculum reshape assessment in English Studies?
Curriculum breadth is a strength, but only if assessment differences are signposted early. As programmes expand from classical literature to contemporary theory and multimodal work, teams benefit from standardising criteria wherever learning outcomes overlap and explaining any intentional module-level differences up front.
A module on multimedia narratives may weight structure and multimodal analysis, while a poetry module may weight prosody and intertextual reading. Releasing criteria with the brief and offering a short Q&A in class or online gives students a clearer route to success and reduces the misunderstandings that often drive negative NSS comments about criteria.
Which assessment methods pose the greatest challenges?
Different assessment types need different protections against subjectivity, a pattern also visible across assessment methods in English studies. Essays test analysis and evidence use, yet invite inconsistent judgements if rubrics do not define what counts as a strong claim, a well-used source or a sophisticated interpretation.
Portfolios need consistent aggregation across diverse artefacts, so checklists with weightings and common error notes help markers and students alike. Presentations and creative pieces work better when criteria separate idea quality, technique and delivery. A short feedback and feedforward approach before submission, backed by marker calibration on a shared sample bank, reduces confusion before it becomes complaint.
How do we develop critical analysis and interpretive skills and assess them consistently?
Criterion-referenced formative feedback helps students improve faster because it turns abstract advice into a concrete next step. The strongest comments point to the specific rubric line a student has not yet met and explain what stronger evidence, structure or reading practice would look like.
Because interpretation is plural, teams should agree baseline expectations for evidence, argumentative structure and use of secondary sources, then assess against those standards. Annotated exemplars that show different routes to a high grade make the key point visible: criteria reward method and justification, not agreement with a single reading.
How can we assess creative writing without losing academic rigour?
Creative writing can stay individual without becoming opaque. The clearest assessment models separate concept, craft and reflection, with criteria that weight originality, control of form and an accompanying commentary that evidences process and literary context.
Sampling moderation and short calibration sessions reduce drift in judgements about voice or resonance. Student survey comments repeatedly ask for explicit descriptors and examples, so providing both, alongside short notes on common pitfalls, makes creative assessment feel fairer and more developmental.
What does intertextuality mean for marking?
Clear expectations make intertextuality easier to assess and easier to teach. Marking should specify what counts as effective intertextual analysis: identifying connections, explaining their significance and integrating them into a coherent argument.
Teams can support consistency with a simple shared scale for accuracy, relevance and integration, then pair it with exemplars that show what strong practice looks like. Talking through these expectations with cohorts sharpens focus and reduces disputes about subjectivity.
How do we balance breadth and depth in assessment?
Well-designed briefs let students show range without rewarding superficial coverage. Survey essays or scene analyses can demonstrate breadth, while focused case studies or extended commentaries invite depth.
Rubrics should reward both coverage and intensive engagement, so students know they are being judged on the quality of thinking rather than the ambition of the topic alone. Offering choice within briefs helps students specialise without losing comparability, because fairness rests in the criteria, not in the chosen text.
Which support systems and resources help students meet criteria?
Support works best when it reinforces the rubric rather than sitting alongside it. Writing centres, peer mentoring and library guidance are more useful when they decode the assessment brief, reference the same criteria and surface recurring sticking points.
Exemplar banks on the VLE, short FAQs and targeted clinics help students translate criteria into practical decisions before submission. When marks are returned, a concise "how your work was judged" note mapped to the rubric gives students a clearer next step and strengthens confidence in the process.
How Student Voice Analytics helps you
Explore Student Voice Analytics if you want evidence for where to standardise criteria first and how to show improvement over time.
See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.