Do history students understand how their work is marked?

By Student Voice Analytics
marking criteria · history

Not consistently. Across the marking criteria category in National Student Survey (NSS) open‑text data, 87.9% of comments are Negative, with a sentiment index of −44.6, signalling sector‑wide dissatisfaction with how criteria are presented and applied. In history, grouped here under the Common Aggregation Hierarchy (CAH) used across UK HE for subject benchmarking, students are broadly positive about their programmes but echo the same assessment pressures: the Marking criteria topic sits at −46.8 with a 3.7% share of comments. This article analyses history students’ concerns and sets out practical steps programmes use to make expectations explicit, calibrate markers, and improve feedback quality.

In surveys, history students often raise concerns about inconsistent marking, opaque feedback, and variation between tutors. Starting the assessment process with unambiguous criteria matters for fairness and for learning. Because markers come from diverse academic backgrounds, programmes should articulate analytical and evidential requirements early and show how these map to grade bands. Engaging with the student voice through text analysis helps staff refine criteria and teaching approaches.

How do history courses define and apply marking criteria?

Historical assessments hinge on interpretation, source analysis and argumentation. Criteria should therefore foreground what constitutes robust historical method and how that is rewarded in grading. Provide annotated exemplars at key grade bands, and use checklist‑style rubrics with weightings and common error notes. Release criteria with the assessment brief and walk students through them in class or online. Short, structured “feed‑forward” sessions before deadlines help students test their approach against the rubric. These steps reduce ambiguity while still valuing disciplinary judgement.

What concerns do history students raise most often?

Students report uneven application of criteria and feedback that does not map to the stated standards. Variation between markers undermines confidence and perceptions of fairness. Programmes can reduce this by running marker calibration using a shared sample bank and by publishing short “what we agreed” notes to students. Requiring assessors to reference rubric lines in feedback (“how your work was judged”) turns feedback into an improvement tool rather than a verdict.

How do course disruptions affect assessment?

Disruptions such as industrial action or structural changes intensify uncertainty about expectations. In the history data, the Strike Action topic appears in 4.6% of comments and is strongly negative. When timetabling or assessment formats shift, programmes should state “what changed and why,” adjust criteria only where necessary, and provide worked examples of any reweighted components. A single, consistently updated source of truth on the VLE limits confusion.

How can students navigate inconsistent feedback?

Students benefit from requesting clarification meetings that link comments to specific rubric descriptors and exemplars. Staff should provide targeted, criterion‑referenced comments that distinguish between argument quality, evidence handling, and structure. Where marks diverge, second marking or moderation notes that explain the final judgement improve transparency and student trust.

What does effective communication about criteria look like?

Explain criteria in plain English and show how each element contributes to marks. Hold short Q&A sessions after releasing briefs and follow up with succinct FAQs that address recurring queries. With each returned grade, provide a brief summary referencing rubric lines and priority actions for the next assignment. Standardise criteria across modules where learning outcomes overlap and flag intentional differences ahead of time.

How should programmes uphold fair assessment practices?

Make the route to review and appeal visible and time‑bound. Begin with a discussion anchored in the rubric and exemplars; escalate to independent review if concerns remain. Provide moderation statements at cohort level so students see how consistency was assured. Train markers together and revisit calibration periodically, especially on high‑volume modules.

What does a fair, transparent assessment process require?

Focus on consistent criteria, shared exemplars, and visible calibration. Close the loop on feedback by aligning comments to the rubric and by offering short feed‑forward opportunities. Use student voice analysis to prioritise refinements, and ensure policies for review and appeal are accessible and respected. The aim is for every student to recognise how their work was evaluated and how to progress in the next task.

How Student Voice Analytics helps you

Student Voice Analytics surfaces where and why students struggle with marking criteria in history. It tracks sentiment over time by cohort, mode and site, and benchmarks against other CAH areas so you can target the parts of the programme where tone is most negative. Teams can export concise, anonymised summaries for modules and boards, compare like‑for‑like across demographics, and evidence progress with clear, year‑on‑year movement. The result is faster prioritisation, sharper calibration, and more useful feedback.

Request a walkthrough

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready governance packs.
  • Benchmarks and BI-ready exports for boards and Senate.
