Do English Studies students understand how their work is marked?

Published Jun 16, 2024 · Updated Oct 12, 2025

marking criteria · English studies (non-specific)

Mostly not. Across the UK, National Student Survey (NSS) open‑text comments grouped under marking criteria are heavily negative (87.9% Negative; sentiment index −44.6). For English studies (non‑specific), the discipline‑level extract does not include topic‑level rows, so we apply sector‑wide evidence to modules where interpretation and creativity dominate. With 72.7% of comments from younger students and 75.8% from full‑time study, the pressure points mirror typical English Studies cohorts; among related subject areas, Law registers −47.2, underlining how interpretive marking drifts without consistent calibration. The practical response is to make criteria visible, exemplified and calibrated so students can see how judgements are reached.
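The post does not define how the sentiment index is calculated. As a purely illustrative sketch, assuming a common convention in which the index is the percentage of positive comments minus the percentage of negative comments across all classified comments, the arithmetic might look like this (all counts are hypothetical):

```python
# Purely illustrative: the exact sentiment-index formula is not given in this post.
# This sketch assumes the index is the share of positive comments minus the share
# of negative comments, expressed on a -100 to +100 scale.

def net_sentiment_index(positive: int, negative: int, neutral: int) -> float:
    """Net sentiment on a -100 to +100 scale under the assumed convention."""
    total = positive + negative + neutral
    if total == 0:
        raise ValueError("no comments to score")
    return 100 * (positive - negative) / total

# Hypothetical counts for a single theme such as marking criteria:
print(net_sentiment_index(positive=70, negative=520, neutral=410))  # -45.0
```

Under a convention like this, a heavily negative split produces a strongly negative index even when a large share of comments is neutral.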

How should English Studies frame assessment from the outset?

In English Studies, precise and usable assessment criteria ensure that the breadth of student skills is evaluated fairly. This discipline spans literature, linguistics and creative writing, so marking must balance creativity with textual analysis and reward argument quality as well as originality. Student voice matters because it surfaces where criteria feel opaque or inconsistently applied; sector data above indicates students expect criteria to be available with the assessment brief, to include weightings, and to be supported by annotated exemplars at grade bands. Staff should run short marker calibration exercises on shared samples and publish “what we agreed” notes, then provide a short “how your work was judged” summary when returning grades. These steps align criteria with learning outcomes and reduce avoidable challenge.

How does a diverse curriculum reshape assessment in English Studies?

Diversifying from classical literature to contemporary theory enriches study but complicates marking. Programmes benefit from standardising criteria across modules where learning outcomes overlap and signposting any intentional differences up front. A module on multimedia narratives may weight structure and multimodal analysis; a poetry module may weight prosody and intertextual reading. Releasing criteria with the brief and offering a short Q&A in class or online improves transparency and reduces misunderstandings that often drive negative NSS remarks about criteria.

Which assessment methods pose the greatest challenges?

Essays, portfolios, presentations and exams each surface different risks. Essays test analysis and evidence use yet invite subjective judgements if rubrics lack unambiguous descriptors. Portfolios require consistent aggregation of diverse artefacts; checklists with weightings and common error notes help. Presentations and creative pieces need criteria that separate idea quality, technique and delivery. A short feed‑forward clinic before submission windows enables students to test their understanding of the rubric against exemplars, while marker calibration on a bank of samples limits variance.

How do we develop critical analysis and interpretive skills and assess them consistently?

Students progress fastest when formative feedback references the specific rubric lines they are not yet meeting, with a brief explanation of what stronger evidence or reading practice would look like. Because interpretation is plural, teams should agree baseline expectations for evidence, argumentative structure and use of secondary sources, then assess against those standards. Publishing annotated exemplars that demonstrate different interpretive routes to a high grade helps students see that criteria reward method and justification, not a single reading.

How can we assess creative writing without losing academic rigour?

Creative writing assessment should separate concept, craft and reflection. Criteria can weight originality, control of form and an accompanying commentary that evidences process and literary context. Sampling moderation and short calibration sessions reduce drift in judgements of voice or resonance. Student surveys routinely ask for explicit descriptors and examples; providing these, plus brief notes on common pitfalls, supports fair and developmental marking while respecting individuality.

What does intertextuality mean for marking?

Intertextual analysis shows how students situate texts within traditions and debates. Marking should specify what counts as effective intertextual work: identifying connections, explaining their significance and integrating them into an argument. Teams can agree a simple scale for accuracy, relevance and integration, then share exemplars that show strong practice. Discussing these expectations with cohorts sharpens focus and reduces disputes about subjectivity.

How do we balance breadth and depth in assessment?

Assessment can signal breadth through survey essays or scene analyses while inviting depth via focused case studies or extended commentaries. Rubrics should reward both coverage and intensive engagement. Offering choice within briefs lets students specialise without losing comparability; the criteria, not the chosen text, do the heavy lifting for fairness.

Which support systems and resources help students meet criteria?

Writing centres, peer mentoring and library guidance all contribute to student success when aligned with the rubric. Clinics that decode assessment briefs, exemplar banks on the VLE, and short FAQs that track recurring queries help close the loop. When returning marks, a concise “how your work was judged” note mapped to criteria allows students to act on feedback and improves confidence in the process.

How Student Voice Analytics helps you

  • Track student sentiment on marking criteria over time, from provider to school, department and programme, with drill‑downs by cohort, site and mode.
  • Compare like‑for‑like across the English Studies CAH area and demographics to pinpoint where tone is most negative and where calibration will have the greatest impact.
  • Export concise, anonymised summaries for programme teams and boards, with ready‑to‑use tables and year‑on‑year movement for action planning.
  • Evidence improvement by linking targeted changes in criteria, exemplars and calibration to shifts in NSS open‑text tone for relevant cohorts (a minimal sketch of this kind of year‑on‑year calculation follows this list).
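For teams who want to reproduce a simple version of this tracking from an export, the sketch below shows one way to compute net sentiment by programme and year from anonymised comment‑level labels. It is a minimal illustration, not the Student Voice Analytics implementation: the file name and the 'year', 'programme', 'theme' and 'sentiment' columns are hypothetical, and it reuses the same assumed net‑sentiment convention as above.

```python
# Minimal illustration, not the Student Voice Analytics implementation.
# Assumes a hypothetical CSV export with columns: year, programme, theme, sentiment
# (sentiment labelled "Positive", "Neutral" or "Negative").
import pandas as pd

comments = pd.read_csv("nss_comments_export.csv")  # hypothetical file name

# Restrict to the theme of interest, here marking criteria.
criteria = comments[comments["theme"] == "marking criteria"]

def net_sentiment(group: pd.DataFrame) -> float:
    """% positive minus % negative within one programme-year group."""
    shares = group["sentiment"].value_counts(normalize=True)
    return 100 * (shares.get("Positive", 0.0) - shares.get("Negative", 0.0))

# Net sentiment per programme per year, then year-on-year movement.
trend = (
    criteria.groupby(["programme", "year"])
    .apply(net_sentiment)
    .rename("net_sentiment")
    .reset_index()
    .sort_values(["programme", "year"])
)
trend["yoy_change"] = trend.groupby("programme")["net_sentiment"].diff()
print(trend)
```

A table like this gives programme teams the year‑on‑year movement referred to above, which can then be read alongside the specific changes made to criteria, exemplars and calibration.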

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
