Do management studies students find marking criteria fair and usable?

Updated Mar 28, 2026

marking criteria · management studies

Mostly no. Management studies students describe marking criteria as unclear, inconsistently applied, and harder to use than they should be.

Across the National Student Survey (NSS), student comments on marking criteria are strongly negative: 87.9% of comments are negative, giving a sentiment index of -44.6. Within management studies, the marking criteria topic is more negative still at -48.4, even though the subject's overall sentiment remains positive, with 53.0% of comments positive. This category captures how students experience rubric clarity and consistency across UK provision. It sits within the wider undergraduate student comment themes and categories framework, and the CAH grouping for management studies is used sector-wide to benchmark across business and management programmes. The implication is practical: if criteria feel unclear or unevenly applied, students question both fairness and the value of feedback.
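
To make the arithmetic concrete, the sketch below shows one common way a net sentiment index can be computed from comment-level labels (positive share minus negative share). This construction, and the sample data, are assumptions for illustration; the published NSS figures above may be derived differently.

```python
from collections import Counter

def sentiment_summary(labels):
    """Summarise comment-level sentiment labels.

    Returns the negative share (%) and a net sentiment index, assumed
    here to be positive share minus negative share.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    pos = 100 * counts["positive"] / total
    neg = 100 * counts["negative"] / total
    return neg, pos - neg

# Hypothetical labelled comments for one theme
labels = ["negative"] * 70 + ["neutral"] * 20 + ["positive"] * 10
pct_negative, index = sentiment_summary(labels)
print(f"{pct_negative:.1f}% negative, sentiment index {index:+.1f}")
# -> 70.0% negative, sentiment index -60.0
```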

Management studies combines strategic thinking with practical application, so assessment needs to reflect both. Students benefit from marking criteria that guide their work and show how performance maps to real business contexts. Bringing student voice into surveys and criteria design helps keep assessment relevant and fair. Text analysis of survey feedback then helps institutions refine criteria so they remain rigorous and understandable.

How can marking criteria be transparent and predictable?

Ambiguity undermines trust in marking. Students value criteria that make expectations and grade standards predictable; when descriptors are vague or applied inconsistently, motivation drops and dissatisfaction rises. In response, programme teams increasingly co-design criteria with students, publish annotated exemplars at key grade bands, and use checklist-style rubrics with weightings and notes on common errors. Releasing criteria with the assessment brief, together with a short walk-through, improves understanding, while marker calibration against a shared sample set helps align standards. Adding a brief "how your work was judged" summary alongside grades shows how markers interpreted the rubric and gives students clearer feed-forward.
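
As a concrete illustration of the checklist-style weighting described above, here is a minimal sketch; the criterion names, weights, and marks are hypothetical, not drawn from any particular programme.

```python
# Hypothetical weighted rubric: criterion -> (weight, mark out of 100)
rubric = {
    "analysis":    (0.40, 68),
    "application": (0.30, 72),
    "structure":   (0.20, 60),
    "referencing": (0.10, 75),
}

# Weights must sum to 1 for the overall mark to stay on a 100-point scale
assert abs(sum(w for w, _ in rubric.values()) - 1.0) < 1e-9

overall = sum(weight * mark for weight, mark in rubric.values())
for criterion, (weight, mark) in rubric.items():
    print(f"{criterion:<12} weight {weight:.0%}  mark {mark}")
print(f"overall: {overall:.1f}")  # -> overall: 68.3
```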

Do criteria capture real-world relevance for management students?

Assessment gains credibility when it reflects genuine business practice. Criteria should test both theoretical command and the application of theory to realistic scenarios. Case work, live projects, and simulations let students demonstrate strategic thinking and practical problem-solving, provided the criteria spell out how applied judgement, data use, and teamwork are assessed. Balancing these authentic tasks with theory-focused components supports a rounded education. Consulting students during criteria design keeps objectives close to professional expectations and strengthens confidence in fairness.

What feedback helps students use criteria to improve?

Actionable, timely feedback mapped directly to rubric descriptors has the greatest impact. Students want to see which parts of the rubric they met and what to do next, a pattern that also appears in feedback across business and management studies. Feedback aligned to future tasks, and returned quickly enough to act on, improves attainment and credibility. With younger students making up 72.7% of comments in this category, early induction to criteria and exemplars in first-year modules helps set expectations that carry across the programme. When feedback feels disconnected from the criteria, students treat it as arbitrary.

How can we secure consistency across modules and markers?

Variability between modules creates confusion and perceptions of unfairness. Programme teams can standardise criteria where learning outcomes overlap and explicitly signal any intentional differences. Regular calibration using a bank of shared samples, followed by brief "what we agreed" notes for students, reinforces reliability. Text analysis of student feedback helps identify problematic descriptors and drift in interpretation, so teams can revise rubrics and target staff development sooner.
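
One way text analysis can surface problematic descriptors is by tallying how often each descriptor attracts negative comment sentiment. The sketch below assumes comments have already been tagged with a rubric descriptor and a sentiment label; the data and the 50% threshold are illustrative.

```python
from collections import defaultdict

# Hypothetical feedback comments, each tagged (rubric descriptor, sentiment)
comments = [
    ("criticality", "negative"), ("criticality", "negative"),
    ("criticality", "positive"), ("structure", "negative"),
    ("structure", "positive"), ("referencing", "positive"),
]

tallies = defaultdict(lambda: {"negative": 0, "total": 0})
for descriptor, sentiment in comments:
    tallies[descriptor]["total"] += 1
    tallies[descriptor]["negative"] += sentiment == "negative"

THRESHOLD = 0.5  # flag descriptors with more than 50% negative mentions
for descriptor, t in sorted(tallies.items()):
    rate = t["negative"] / t["total"]
    flag = "  <- review wording" if rate > THRESHOLD else ""
    print(f"{descriptor:<12} {rate:.0%} negative ({t['total']} mentions){flag}")
```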

How should group work evidence individual contributions?

Group projects mirror workplace practice, but they raise concerns about fairness if individual effort is obscured. Criteria can combine assessment of the collective output with evidence of individual contribution, following group work assessment best practice. Structured roles, milestone artefacts, and light-touch peer evaluation provide usable evidence without excessive process. Rubrics that separate teamwork behaviours from disciplinary outcomes protect individual accountability while still rewarding collaboration.
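
One widely used way to evidence individual contribution is to moderate the group mark by peer-evaluation scores, in the style of WebPA. The sketch below is illustrative only: the names, ratings, and the decision to cap marks at 100 are assumptions, not a recommended policy.

```python
# Illustrative WebPA-style moderation: each member's individual mark scales
# the group mark by their peer rating relative to the team mean.
group_mark = 65
peer_scores = {"Asha": 4.5, "Ben": 3.0, "Chloe": 4.5}  # mean peer ratings

mean_score = sum(peer_scores.values()) / len(peer_scores)
for member, score in peer_scores.items():
    factor = score / mean_score
    individual = min(100, round(group_mark * factor))
    print(f"{member}: factor {factor:.2f} -> individual mark {individual}")
# -> Asha 73, Ben 49, Chloe 73
```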

How can varied assessments stay aligned to criteria?

Diverse assessment methods showcase different competencies, from analysis to implementation, a broader issue explored in assessment methods in management studies. Reflective logs and portfolios can surface learning progression, while presentations and simulations test applied decision-making. As formats diversify, criteria must evolve in parallel so markers assess the intended construct, not presentation polish or tacit expectations. Publishing task-specific exemplars, clarifying weightings, and using unambiguous descriptors help students target effort appropriately.

What practical improvements should programmes prioritise now?

Focus on assessment clarity first:

  • Publish annotated exemplars and checklist-style rubrics with weightings.
  • Release criteria with the assessment brief and run a short Q&A.
  • Calibrate markers using shared samples, then share "what we agreed" notes.
  • Add a "how your work was judged" summary to returned grades.
  • Standardise criteria across modules where outcomes overlap, and explain any differences up front.
  • Offer brief feed-forward clinics before major submissions.
  • Track recurring student questions about criteria and close the loop on changes made.

How Student Voice Analytics helps you

Student Voice Analytics shows where criteria and feedback are driving sentiment in management studies, so teams can act on the issues students feel most strongly about. You can track movement in the marking criteria theme over time, compare patterns across management studies cohorts, sites, and modes, and drill down from school to programme to module. Concise, anonymised summaries and representative comments support calibration sessions and assessment design reviews. Like-for-like comparisons across CAH areas and demographics help you prioritise interventions that shift tone, while export-ready outputs make it easier to brief boards and external partners.

Explore Student Voice Analytics to see where unclear criteria, inconsistent marking, or weak feedback processes are creating the most friction in management studies.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
