Do human geography students get feedback that drives improvement?

By Student Voice Analytics
feedback · human geography

Not reliably. Across the National Student Survey (NSS), the Feedback category shows 57.3% negative sentiment (index −10.2), signalling sector-wide problems with timeliness, usefulness and clarity. Within human geography, feedback features in ≈6.7% of comments and trends negative (≈ −24.7), with marking criteria particularly problematic (≈ −47.3). This sits alongside pronounced strengths in fieldwork and trips (≈ +42.7): the discipline’s experiential learning is not the issue; how we scaffold and communicate assessment expectations is.

Feedback is a cornerstone of academic development for human geography students in UK higher education. To support learning across urban planning, environmental sustainability and other subfields, it must be specific, actionable and transparent. Using student voice sources, including NSS open-text comments and programme-level surveys, staff can analyse patterns and prioritise the changes that best support students’ progression.

What makes feedback challenging in human geography?

The interdisciplinary nature of human geography means criteria and conventions vary across methods and modules. Students move between statistical analysis and qualitative ethnography, GIS mapping and sociocultural critique. They need feedback that maps to the assessment brief and marking criteria for each method, while also connecting to programme outcomes. Younger and full-time cohorts often react more negatively to feedback than mature and part-time peers, so modules with large first- and second-year cohorts benefit from consistent turnaround times, feed-forward and short guides on using feedback.

How transparent is the marking system?

Ambiguity in marking criteria erodes trust and slows improvement. Providers should publish accessible criteria with annotated exemplars showing work at different grade bands, and explain how the criteria are applied at the start of each module. Set a visible feedback service level by assessment type and monitor on-time rates. Schedule brief Q&A or office-hour drop-ins near submission and return dates so students can test their plans and their interpretation of comments. Calibrate across markers at the start of each assessment diet and spot-check feedback for specificity, actionability and alignment to the criteria.
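
As an illustration of the monitoring step, here is a minimal sketch in Python, assuming a simple in-house record of submission and return dates; the column names and the 15-working-day target are assumptions for the example, not sector policy.

```python
# Minimal sketch: on-time feedback rates by assessment type.
# Data, column names and the target are illustrative assumptions.
import pandas as pd

# Hypothetical assessment records: one row per marked submission.
records = pd.DataFrame({
    "assessment_type": ["essay", "essay", "GIS report", "GIS report", "exam"],
    "submitted": pd.to_datetime(["2024-02-01", "2024-02-01", "2024-02-08",
                                 "2024-02-08", "2024-03-01"]),
    "returned":  pd.to_datetime(["2024-02-19", "2024-03-04", "2024-02-22",
                                 "2024-02-26", "2024-03-20"]),
})

TARGET_DAYS = 15  # illustrative service level

records["turnaround_days"] = (records["returned"] - records["submitted"]).dt.days
records["on_time"] = records["turnaround_days"] <= TARGET_DAYS

# On-time rate per assessment type: the visible figure to publish and track.
print(records.groupby("assessment_type")["on_time"].mean().round(2))
```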

Which is more effective: formative or summative feedback?

Formative feedback drives learning when it arrives early enough to change the next attempt. Build staged tasks with short, structured feed-forward that tells students what to do next and where to look in the marking criteria. Summative feedback still matters for progression, but students make best use of it when comments flag transferable actions for the next module. Close the loop in class by showing how previous themes in feedback have shaped current assessment design.

How should feedback differ for quantitative and qualitative work?

For quantitative assignments, target methodological accuracy, data quality and interpretation, and reference the statistical or spatial techniques named in the assessment brief. For qualitative work, focus on argumentation, theoretical framing, coherence and evidence use, and show how to deepen analysis with sources or data. In both cases, signpost students to exemplars and marking criteria so they can see how comments change their next submission.

What practices make feedback actionable?

  • Use concise rubrics with space for criterion-linked comments and a short feed-forward section.
  • Provide annotated exemplars and short “how to use your feedback” notes in the VLE, ideally within each module.
  • Build dialogic elements: quick debriefs after fieldwork, small-group reviews where students apply criteria to samples, and targeted tutorials for complex methods like GIS.
  • Employ light-touch text analysis to identify recurring issues in a cohort and address them in workshops or guidance (see the sketch after this list).
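
A minimal sketch of that light-touch analysis, assuming free-text module comments are already collected in-house; the comments, stopword list and length filter below are illustrative only.

```python
# Minimal sketch: surface recurring terms in cohort comments.
from collections import Counter
import re

# Hypothetical free-text comments from one cohort's module survey.
comments = [
    "Marking criteria were unclear for the GIS report",
    "Feedback arrived too late to help with the next essay",
    "Rubric comments were vague and hard to act on",
    "Late feedback again, and the criteria were not explained",
]

# A tiny illustrative stopword list; a real one would be longer.
STOPWORDS = {"the", "to", "for", "with", "and", "were", "was", "too", "not", "on"}

term_counts = Counter(
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in STOPWORDS and len(word) > 3
)

# Recurring terms suggest themes (here: criteria, feedback, late)
# to address in workshops or written guidance.
print(term_counts.most_common(5))
```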

What do students suggest to improve feedback?

Students ask for detailed rubrics, interactive feedback sessions and consistent marking across assessors. Standardise evaluation through short calibration sprints and shared marking of samples. Publish “you said → we did” updates on feedback formats and turnaround to show progress. Maintain consistency across modules while allowing space for individual analytical expression.

What should we take forward?

Prioritise clarity and timeliness: criteria students can use, annotated exemplars, structured feed-forward and predictable turnaround. Calibrate teams regularly and adopt working practices from part-time and mature provision where dialogic feedback and staged tasks are common. Human geography’s fieldwork strength shows that well-briefed, well-debriefed learning environments deliver; feedback and assessment design should match that standard.

How Student Voice Analytics helps you

Student Voice Analytics turns NSS open-text and internal surveys into trackable metrics for feedback and human geography. It lets you drill from provider to programme, compare CAH areas and cohorts, and surface where tone is weakest so you can prioritise modules and markers. Export concise, anonymised summaries for boards and teaching teams, monitor on-time feedback performance, and evidence changes with like-for-like comparisons over time.
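
The underlying metrics are straightforward to reason about. Here is a minimal sketch of a like-for-like comparison over time, assuming comments have already been sentiment-coded; the data, column names and grouping are illustrative assumptions, not the Student Voice Analytics API.

```python
# Minimal sketch: negative-sentiment share by category and year.
# The dataset and labels are hypothetical, for illustration only.
import pandas as pd

comments = pd.DataFrame({
    "year":      [2023, 2023, 2023, 2024, 2024, 2024],
    "programme": ["BA Human Geography"] * 6,
    "category":  ["feedback", "feedback", "fieldwork",
                  "feedback", "feedback", "fieldwork"],
    "sentiment": ["negative", "negative", "positive",
                  "negative", "positive", "positive"],
})

# Negative share per category and year: a like-for-like comparison
# that shows whether feedback tone is improving over time.
neg_share = (
    comments.assign(is_negative=comments["sentiment"].eq("negative"))
    .groupby(["category", "year"])["is_negative"]
    .mean()
    .round(2)
)
print(neg_share)
```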

Book a Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and standards and NSS requirements.
