Are economics students getting clarity and consistency in marking?

Updated Mar 20, 2026

marking criteria · economics

Economics students lose confidence quickly when two similar pieces of work receive different grades and nobody can explain why. NSS (National Student Survey) open-text feedback on marking criteria, analysed using our NSS open-text analysis methodology, shows this is not a marginal issue: 87.9% of comments are negative (sentiment index −44.6) across 13,329 comments.

Within economics, the tone is even more negative (−48.1), and criteria-related feedback accounts for 9.8% of the discipline conversation. The practical response is straightforward: publish unambiguous rubrics and annotated exemplars, release criteria with the brief, and calibrate markers so judgement stays consistent across modules.
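
For readers new to these metrics: the precise index formula sits in the linked methodology, but a common construction nets the share of positive comments against the share of negative ones. The short Python sketch below is purely illustrative, using made-up comment counts rather than the NSS figures above.

    # Illustrative only: one common construction of a net sentiment index.
    # Counts are hypothetical; the published methodology may weight or
    # bucket comments differently.

    def sentiment_index(positive: int, negative: int, neutral: int = 0) -> float:
        """Positive share minus negative share, in percentage points (-100 to +100)."""
        total = positive + negative + neutral
        if total == 0:
            raise ValueError("no comments to score")
        return 100.0 * (positive - negative) / total

    # Hypothetical module with 120 positive, 540 negative, 340 neutral comments:
    print(round(sentiment_index(120, 540, 340), 1))  # -42.0

On that hypothetical split the index lands at −42.0; the point is simply that a heavily negative comment balance pulls the index well below zero, which is the pattern the NSS figures above describe.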

Exam and coursework guidance: what do students need to see?

Exam and coursework guidance in economics varies widely. Some modules define expectations with detailed rubrics and exemplars, and students usually respond better when standards are explicit. Others leave too much open to interpretation, especially before exams, which turns preparation into guesswork and makes grading feel less fair. Clear criteria help students direct their effort toward the analytical quality, evidence use, and structure markers actually reward.

Economics departments should publish checklist-style rubrics with weightings and common error notes, align them to learning outcomes, and provide annotated exemplars at key grade bands. Release criteria with the assessment brief, offer a short walk-through or Q&A, and run a feed-forward clinic before submission windows in high-volume modules. Plain language levels the playing field and reduces avoidable anxiety.

What challenges do economics students face in grading?

Many economics students face criteria so detailed that they are hard to decode. That difficulty is amplified by the perception that marks above 70 are rare, which can make grading feel stringent or arbitrary even when standards are academically defensible. Students need to know what distinguishes competent work from excellent work; otherwise sustained effort begins to feel disconnected from outcomes.

Programme teams should communicate what each grade bracket demands and provide illustrative examples of high-quality work, echoing wider evidence on economics students' views on assessment methods. Marker calibration using a short bank of shared samples, with brief notes on what meets versus exceeds the criteria, reduces noise about fairness and consistency. The benefit is twofold: students know what to aim for, and staff can defend decisions with greater confidence.

How timely and useful is feedback in economics?

Timely, criterion-referenced feedback improves the next assignment, not just the last one. When feedback arrives while the material is still fresh, students can act on it; when it arrives late or stays vague, learning slows and motivation drops. Relevance matters as much as speed, because students need comments that show which rubric lines drove the judgement and what to change next.

Agree a realistic feedback turnaround standard at programme level and share it with cohorts. Return a short "how your work was judged" summary with grades, link comments to marking criteria, and make feed-forward routine rather than exceptional, which aligns with what economics students say they need from feedback. That turns feedback into a practical tool for improvement.

Are group assignments marked consistently?

Group work feels fair only when students can see how the criteria apply to both the shared output and their own contribution. If markers interpret the same criteria differently, uncertainty grows and students start managing the process strategically rather than focusing on the learning.

Standardise criteria across modules where learning outcomes overlap and explain any intentional differences up front. Calibrate markers on group assessment exemplars, drawing on group work assessment best practice, publish a simple "what we agreed" note, and review criteria with student representatives. These steps improve transparency, reduce avoidable disputes, and help students collaborate with clearer expectations.

What happens when marking is late or inconsistent?

Late results leave students waiting anxiously and compress the time available to prepare for the next assessment. Inconsistent grading across markers or assignments makes it harder to understand what good performance looks like, especially in a subject that prizes analytical precision. Large cohorts and complex content increase operational pressure, but they do not reduce the need for clear, timely judgement.

Provide regular staff development on agreed criteria, combined with moderation using shared samples, to standardise decisions. Give students examples of graded work and a short rationale tied to rubric lines so expectations stay visible. That combination supports fairness and protects trust in the assessment process.

How do communication gaps affect grade appeals?

When examiners communicate poorly and appeal routes feel opaque, students are less likely to challenge grades even when they have valid concerns. The problem is not simply procedural. If criteria and rationales are hard to access or hard to interpret, students cannot tell whether an outcome reflects their work or a breakdown in communication.

Create a single source of truth for assessment information on the VLE, publish criteria with descriptors and weightings, and maintain a short FAQ for recurring queries. Encourage grade discussions anchored in the rubric and give staff guidance on clear, consistent explanations. A simpler, better-signposted appeal process helps students focus on evidence rather than procedure.

What would transparent, cohesive criteria look like?

Transparent criteria demystify evaluation and make fairness easier to see. Economics programmes need consistent criteria across exams and coursework, a clear map from learning outcomes to grade bands, and regular dialogue with students about how evidence is judged. When those elements are in place, students can concentrate on demonstrating economic reasoning rather than decoding the assessment system.

Publish annotated exemplars, release criteria with assessment briefs, hold short walk-throughs, and run feed-forward clinics before submissions. Calibrate markers, share a short "what we agreed" note, and update guidance when the same questions keep appearing. Sector evidence on marking criteria remains strongly negative (NSS category sentiment index −44.6), so visible, repeatable practice is the most credible way to improve confidence.

How Student Voice Analytics helps you

Student Voice Analytics shows how sentiment about marking criteria shifts over time and by cohort, site, or mode, with drill-downs from provider to school, programme, and module. You can compare economics with other disciplines and demographics, pinpoint where tone is most negative, and give programme teams clear evidence for changes to rubrics, exemplars, feedback turnaround, or marker calibration.

The platform produces concise, anonymised summaries for programme teams and boards, alongside export-ready outputs that help you communicate priorities and progress across the institution. If you want to see where marking clarity is breaking down, explore Student Voice Analytics or read the buyer's guide.

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.

Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround
