Yes. AI students want transparent, consistent criteria, prompt feedback and industry‑relevant assessment, and sector evidence shows these gaps persist. Across open‑text comments in the National Student Survey (NSS), the UK‑wide final‑year student survey, the marking criteria theme accounts for 13,329 comments (≈3.5% of 385,317), with 87.9% negative and an index of −44.6, indicating widespread concern about how criteria are presented and applied. Within Artificial Intelligence, a subject area used for like‑for‑like sector comparisons, marking criteria accounts for ≈7.5% of comments and is even more negative (≈−54.0), while feedback is the largest assessment topic (≈9.0%, −44.8). Students often praise teaching staff (+34.3), so the issue centres on standards and process rather than expertise. These insights shape the practical steps below on fairness, timeliness and relevance.
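For readers who want to verify the headline share, a minimal sketch using only the figures quoted above is shown below; the sentiment index itself is reported by the analysis and is not re‑derived here.

```python
# Sanity-check the share of NSS open-text comments on marking criteria,
# using the counts quoted in the text above.
marking_criteria_comments = 13_329
all_open_text_comments = 385_317

share = marking_criteria_comments / all_open_text_comments * 100
print(f"Marking criteria share: {share:.1f}% of open-text comments")  # prints ~3.5%
```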
How do AI students define fair marking?
Transparency and consistency in grading are the top priorities for artificial intelligence students. They want grading standards tied to predefined criteria and aligned to learning outcomes. Engage with the student voice to surface where rubrics and standards do not land as intended. Publish annotated exemplars at key grade bands, release criteria with the assessment brief, and run marker calibration with a shared sample bank. When students see the same benchmark applied across assessors and modules, they trust the process and can focus on improving their work.
Why does timely feedback matter in AI?
Feedback is a tool for learning, especially as students grapple with complex AI concepts and apply them in real‑world scenarios. Quick, detailed feedback after assignments lets students know where they stand and what improvements to make. Delayed feedback hampers students’ ability to adjust their learning strategies or correct misunderstandings. Staff on AI courses should aim to give feedback promptly, ideally within a week or two of submission, so students can act on the insights while the work is still fresh in their minds. Systems that support efficient turnaround and feed‑forward close the loop and improve outcomes.
What does unambiguous marking criteria look like?
Unambiguous criteria use checklist‑style rubrics with concrete descriptors, weightings, and common error notes. For innovative and technical AI work, exemplars that show “what good looks like” reduce ambiguity. Provide a short “how your work was judged” summary with each grade, referencing the rubric lines applied. This supports consistent marking, helps students align their effort with expectations, and reduces re‑mark and query volumes.
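As an illustration of that structure, the sketch below models a checklist‑style rubric with descriptors, weightings and common error notes, and combines per‑criterion marks into an overall grade. The criteria names, weights and descriptors are hypothetical, not a prescribed scheme.

```python
# Hypothetical checklist-style rubric: each line has a concrete descriptor,
# a weighting and a common-error note. Names and weights are illustrative only.
RUBRIC = [
    {"criterion": "Problem formulation", "weight": 0.20,
     "descriptor": "States the task, data and evaluation metric precisely.",
     "common_errors": "Vague objectives; metric not justified."},
    {"criterion": "Method and implementation", "weight": 0.40,
     "descriptor": "Appropriate techniques, correct and reproducible implementation.",
     "common_errors": "No baseline; undocumented hyperparameters."},
    {"criterion": "Evaluation and analysis", "weight": 0.25,
     "descriptor": "Sound validation, honest error analysis, clear limitations.",
     "common_errors": "Test-set leakage; cherry-picked results."},
    {"criterion": "Communication", "weight": 0.15,
     "descriptor": "Clear report that answers the assessment brief.",
     "common_errors": "Results presented without interpretation."},
]

def overall_mark(scores: dict) -> float:
    """Combine per-criterion marks (0-100) into a weighted overall grade."""
    return sum(row["weight"] * scores[row["criterion"]] for row in RUBRIC)

# A "how your work was judged" summary can then reference the rubric lines applied.
print(overall_mark({
    "Problem formulation": 65,
    "Method and implementation": 72,
    "Evaluation and analysis": 58,
    "Communication": 70,
}))
```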
How should criteria stay consistent across courses?
Variability in standards undermines students’ ability to gauge progress. Standardise criteria across modules where learning outcomes overlap and explain any intentional differences up front. A brief walk‑through of criteria with the assessment brief, a short Q&A, and shared “what we agreed” notes from marker calibration signal consistency. Track recurring queries in an FAQ on the VLE to close the loop.
How should assessments align with industry requirements?
AI students want assessment that mirrors real‑world challenges. Review projects and problem‑solving tasks with external input so they reflect current practice and tooling. Regular consultation with industry partners, alongside explicit mapping of criteria to employability skills, ensures relevance and helps students translate feedback into artefacts for portfolios and interviews.
Where does auto-marking software fall short?
Auto‑marking can miss the nuanced, creative or atypical solutions that characterise AI coursework. Over‑reliance risks grades that do not reflect the published criteria or academic judgement. Use automated checks for defined elements, but maintain substantive human oversight for complex work so the grade reflects performance against the rubric and sustains trust in the process.
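One way to keep that balance is sketched below: automated checks score only the objectively defined elements, while the substantive rubric lines are always routed to a human marker. The field names and structure are hypothetical and do not refer to any particular auto‑marking tool.

```python
# Minimal sketch: automated checks cover only well-defined elements (e.g. a
# functional test suite and submission completeness); nuanced or atypical work
# is always judged by a person against the published rubric.
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str
    tests_passed: int
    tests_total: int
    report_included: bool

def pre_mark(sub: Submission) -> dict:
    """Pre-fill the objectively defined rubric lines; leave the rest to a marker."""
    return {
        "auto_checks": {
            "functional_tests": round(sub.tests_passed / sub.tests_total, 2),
            "report_submitted": sub.report_included,
        },
        # Substantive judgement stays with a human so the grade reflects
        # performance against the rubric, not just what the tool can measure.
        "human_marked_lines": ["method and implementation", "evaluation and analysis"],
    }

print(pre_mark(Submission("anon-001", tests_passed=9, tests_total=12, report_included=True)))
```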
How does lecturer behaviour shape student trust?
Students notice when lecturers are visible, approachable and transparent about standards. Criteria that seem ambiguous, arbitrary or inconsistently applied increase stress and deter dialogue. Regular calibration, openness about how criteria are interpreted, and proactive offers of clarification during timetabled sessions build confidence. Teaching staff are well regarded; making the application of standards as visible as the teaching strengthens that trust.
What should providers do next?
Prioritise visible, consistent criteria and calibration. Publish annotated exemplars, use unambiguous rubrics, release criteria with the brief, and offer short feed‑forward opportunities. Maintain prompt feedback cycles and provide a short “how your work was judged” note with each return. Standardise where learning outcomes overlap, highlight any intentional differences, and maintain a VLE FAQ that addresses recurring student queries. Monitor NSS open‑text comments and internal channels to target the modules and cohorts where tone on criteria and feedback trends most negative.
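A minimal sketch of that monitoring step is below. It assumes a table of coded open‑text comments with module, year, theme and sentiment columns; this schema is illustrative, not an official NSS export format.

```python
# Identify modules where tone on assessment themes trends most negative.
# The DataFrame below stands in for coded NSS open-text or internal comments;
# sentiment is coded -1 (negative), 0 (neutral), +1 (positive).
import pandas as pd

comments = pd.DataFrame({
    "module":    ["AI101", "AI101", "AI202", "AI202", "AI202"],
    "year":      [2023, 2024, 2023, 2024, 2024],
    "theme":     ["marking criteria", "marking criteria", "feedback", "feedback", "marking criteria"],
    "sentiment": [1, -1, -1, -1, -1],
})

assessment_themes = ["marking criteria", "feedback"]
trend = (
    comments[comments["theme"].isin(assessment_themes)]
    .groupby(["module", "year"])["sentiment"]
    .mean()                 # average tone per module per year
    .unstack("year")
)
# Most negative tone in the latest year first: these modules get priority.
print(trend.sort_values(by=trend.columns[-1]))
```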
How Student Voice Analytics helps you
Student Voice Analytics shows how sentiment on marking criteria and assessment moves over time and by cohort, site or mode, with drill‑downs from provider to school, department and programme. It enables like‑for‑like comparisons for Artificial Intelligence against the wider sector and by demographics to target cohorts where tone is most negative. You can export concise, anonymised summaries for programme teams and boards with ready‑to‑use tables and year‑on‑year movement, so priorities and progress are straightforward to share and act on.
Request a walkthrough
See all-comment coverage, sector benchmarks, and governance packs designed for OfS quality and NSS requirements.