Do human geography students find marking criteria usable?

Updated Mar 16, 2026

marking criteria · human geography

When marking criteria feel vague, human geography students are left guessing what a strong submission actually looks like. In National Student Survey (NSS) open-text analysis of the marking criteria theme across UK higher education, 87.9% of 13,329 comments are negative. Human geography students contribute 3,159 comments overall and raise marking criteria in 3.7% of them, with a sentiment index of -47.3. Across the sector the theme captures concerns about clarity and consistency; within the subject, students repeatedly ask for criteria, exemplars and marking practices they can use before submission, not decode afterwards.

What is distinctive about human geography for assessment?

Human geography blends social science reasoning with spatial and methodological rigour, so criteria need to reward conceptual synthesis alongside technique. Students often rate fieldwork, trips and staff support highly in this subject, as shown in student views on fieldwork and placements in human geography, but the criteria still need to show how theory, evidence and applied enquiry will be judged. When that balance is explicit, students can plan stronger work and trust the standards they are being asked to meet.

How should criteria balance qualitative and quantitative work?

Balance qualitative interpretation and argumentation against quantitative accuracy and method so students can see how each element contributes to the grade. Use checklist-style rubrics with descriptors for both strands, and state weightings for each element. For mixed-methods tasks, publish annotated exemplars at key bands to show what good looks like across different forms of evidence. That makes the scheme usable while students are planning, not just after marks are released.
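
To make the arithmetic concrete, the sketch below shows how published strand weightings might combine into an overall mark. It is a minimal Python illustration: the strand names, weightings and marks are assumptions for this example, not a prescribed human geography scheme.

```python
# Minimal sketch: combining rubric strands into an overall mark.
# The strand names and weightings are illustrative assumptions,
# not a published human geography marking scheme.

RUBRIC_WEIGHTS = {
    "qualitative_interpretation": 0.35,  # argument, synthesis, use of theory
    "quantitative_method": 0.35,         # accuracy, appropriate techniques
    "fieldwork_evidence": 0.20,          # data quality, ethics, reflexivity
    "presentation": 0.10,                # structure, referencing, cartography
}

def weighted_mark(strand_marks: dict[str, float]) -> float:
    """Combine per-strand marks (0-100) using the published weightings."""
    assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(RUBRIC_WEIGHTS[s] * strand_marks[s] for s in RUBRIC_WEIGHTS)

# Example: a submission strong on interpretation, weaker on method.
print(round(weighted_mark({
    "qualitative_interpretation": 72,
    "quantitative_method": 58,
    "fieldwork_evidence": 65,
    "presentation": 70,
}), 1))  # 65.5
```

Publishing the weights alongside descriptors lets students do exactly this calculation themselves while planning, which is what makes the scheme usable before submission.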

What blend of subjective and objective judgement works best?

Combine structured rubric lines, which provide objective anchors, with space for professional judgement on synthesis, originality and use of evidence. Run marker calibration on a small bank of shared samples and publish brief "what we agreed" notes so students can see how discretion operates within the rubric. This keeps room for academic judgement without making marking feel arbitrary.
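
One lightweight way to support calibration is to quantify how far markers diverge on the shared sample bank before agreeing a position. The following is a minimal Python sketch under assumed data: the marker names, sample IDs, marks and tolerance are invented for illustration, and real moderation would use your own records.

```python
# Minimal sketch: checking marker agreement on a shared sample bank.
# Marker names, sample IDs and marks below are invented for illustration.
from statistics import mean

# marks[marker][sample_id] = mark awarded on a shared script
marks = {
    "marker_a": {"s1": 62, "s2": 55, "s3": 71},
    "marker_b": {"s1": 65, "s2": 52, "s3": 74},
    "marker_c": {"s1": 60, "s2": 58, "s3": 69},
}

def spread_per_sample(marks: dict) -> dict[str, float]:
    """Max-min spread of awarded marks for each shared sample."""
    samples = next(iter(marks.values())).keys()
    return {
        s: max(m[s] for m in marks.values()) - min(m[s] for m in marks.values())
        for s in samples
    }

spreads = spread_per_sample(marks)
print(spreads)  # {'s1': 5, 's2': 6, 's3': 5}
print(f"mean spread: {mean(spreads.values()):.1f}")  # flag samples above tolerance
```

Samples with a wide spread are the ones worth discussing in calibration, and the outcome of that discussion is what feeds the "what we agreed" notes shared with students.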

Where do student expectations diverge from practice?

Students expect criteria that map directly to the assessment brief, transparent weighting for fieldwork versus write-up, and consistent marking across modules. Too often they meet opaque descriptors or different rules for similar tasks. Release criteria with the assessment brief, signpost any intentional differences across modules, and offer a short walk-through or Q&A so cohorts can test their understanding before submission. The result is fewer avoidable surprises and more confidence in the fairness of the process.

What feedback practice helps students use criteria?

Provide a concise "how your work was judged" summary tied to rubric lines, with one or two actionable feed-forward points per criterion. Align comments to the language of the marking scheme so students can self-assess against it in future tasks. Where dissertations and research projects are involved, set and meet a realistic feedback service level, informed by what human geography students say about usable feedback, and reference the criteria explicitly in supervision notes. That turns criteria into a live learning tool, not a document students only revisit after disappointment.
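
As a small illustration, a criteria-referenced summary can be assembled mechanically from rubric-level judgements. The Python sketch below is a minimal example; the criterion names, marks and comments are invented assumptions, not output from any real marking system.

```python
# Minimal sketch: assembling a criteria-referenced "how your work was judged"
# summary. All criterion names, marks and comments are invented assumptions.

feedback = [
    # (criterion, mark, how it was judged, feed-forward point)
    ("Use of theory", 68,
     "Strong engagement with the mobilities literature.",
     "Tie each concept back to your research question."),
    ("Quantitative method", 55,
     "Descriptive statistics applied correctly.",
     "Justify your sampling strategy and state its limits."),
]

for criterion, mark, judgement, feed_forward in feedback:
    print(f"{criterion} ({mark}/100)")
    print(f"  Judged: {judgement}")
    print(f"  Next time: {feed_forward}")
    print()
```

Keeping the summary in the rubric's own language is the point: students can line the comments up against the criteria and self-assess the next task with the same vocabulary.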

How do criteria shape learning outcomes?

Criteria direct attention and effort. When students understand how learning outcomes translate into standards and weightings, they plan data collection, analysis and argument more effectively. Standardise criteria where outcomes overlap, and make any departures explicit so students can transfer learning between modules without second-guessing the rules. Clear criteria improve not just marking, but the quality of preparation before submission.

What should programmes change now?

  • Publish annotated exemplars for each assessment type and grade band.
  • Use checklist rubrics with unambiguous descriptors, visible weightings and common error notes.
  • Calibrate markers regularly and share a short "what we agreed" summary with the cohort.
  • Release criteria with the assessment brief, then hold a short in-class or online walk-through.
  • Provide a brief criteria-referenced summary with each returned grade, plus an opportunity for feed-forward discussion.
  • Standardise criteria across modules where outcomes match, and highlight intentional differences up front.
  • For fieldwork-heavy tasks and dissertations, spell out how data quality, analysis and reflective components are weighted.

How Student Voice Analytics helps you

Student Voice Analytics shows where marking criteria create friction across programmes and within human geography, with consistent comparisons by cohort, study mode and domicile. You can track sentiment over time, drill from provider to school, department or programme, and export concise, anonymised briefs for boards and teaching teams. That makes it easier to identify where criteria are unclear, test whether rubric changes improve sentiment, and show students that feedback has led to action.

Explore Student Voice Analytics to see which cohorts are struggling with marking criteria, and whether your changes are improving confidence.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
