Updated Apr 10, 2026
Graduate outcomes arrive too late to help the cohort that raised the concern. The earlier warning often appears in student feedback: careers support feels generic or badly timed, placements feel out of reach, or a course is not building confidence for the workplace. That is why Tran Le Huu Nghia, Pham Lan Anh and Nguyen Thi My Duyen's paper in Higher Education, “Measuring students’ self-perceived employability capital attainment: the development and validation of a scale”, matters. For UK universities using student feedback to improve the student experience, it asks a practical question: how do you measure employability development while students are still studying, so teams can act before weak signals harden into poor outcomes?
Universities have spent years building employability into curricula, placements, careers services, and extracurricular activity, yet evaluation still lags behind delivery. Destination data arrives long after the students it describes have left, and local surveys often rely on a few improvised questions that do not add up to a clear picture of what students think they have actually developed.
This paper addresses that gap by constructing and validating the Self-Perceived Employability Capital Attainment (SPECA) scale. The core research problem is straightforward: if institutions want to tailor employability provision and help students identify areas for growth, they need a defensible way to assess attainment, not just participation or destination. In practice, that gives teams a better chance of spotting weak provision while current students can still benefit from changes.
The paper begins from a measurement problem, not from another employability intervention. The authors note that universities already invest in employability through many initiatives, but still lack strong instruments for assessing what students believe they have gained from those efforts.
"there remains a lack of robust tools to assess students’ attainment of employability capital"
The authors respond by building a 20-item scale across three studies involving 1,719 participants. That matters because the paper is not presenting a single pilot questionnaire. It reports a staged validation process that tests whether the measure is usable, coherent, and more rigorous than ad hoc employability items added to a survey at the last minute.
Content validity was built deliberately through both evidence and stakeholder input. The abstract reports a systematic literature review plus consultation with career experts and students. That is an important design choice: the scale is not only statistically tested but also grounded in how employability is discussed in research and understood by the people expected to act on the results.
The psychometric testing is broader than many institutional surveys ever receive. Exploratory and confirmatory factor analyses supported the intended structure, and the paper also reports evidence of reliability, convergent validity, and discriminant validity. In practical terms, that suggests the scale is doing more than capturing a vague sense of confidence or satisfaction. It is trying to measure a distinct construct with internal coherence, which makes any follow-up action easier to justify.
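For readers who want a concrete sense of what the reliability side of that testing involves, here is a minimal sketch of Cronbach's alpha, a standard internal-consistency statistic for multi-item scales. It does not reproduce the paper's analysis; the DataFrame and column names are hypothetical.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one row per respondent, one column per Likert item in a subscale.
    items = items.dropna()                      # listwise deletion, for simplicity
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical use on one subscale's items:
# alpha = cronbach_alpha(df[["item1", "item2", "item3", "item4"]])

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on how the scores will be used.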
The paper also tests whether the instrument holds up beyond one-off administration. By examining nomological validity and temporal stability, the authors move the discussion from “does this look sensible?” to “does this behave like a real measurement instrument over time and in relation to other variables?” For Student Experience and Careers teams, that is the difference between an interesting questionnaire and a tool that could support longitudinal tracking and better-timed intervention.
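Temporal stability is typically examined by administering the same instrument twice and correlating the scores. A minimal sketch of that check, assuming two pandas DataFrames of item responses indexed by student id (all names hypothetical, not the authors' code):

import pandas as pd

def test_retest(time1: pd.DataFrame, time2: pd.DataFrame) -> float:
    # Total score per student at each administration.
    t1 = time1.sum(axis=1)
    t2 = time2.sum(axis=1)
    # Keep only students who responded both times, then correlate.
    paired = pd.concat([t1, t2], axis=1, join="inner").dropna()
    return paired.iloc[:, 0].corr(paired.iloc[:, 1])  # Pearson r

A high correlation across a sensible interval suggests the instrument is capturing something stable rather than mood on the day.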
For UK higher education, the first implication is that employability should be measured earlier and more directly than graduate outcomes alone allow. Graduate Outcomes data is useful, but it is a delayed signal. If institutions want to know whether curriculum changes, placement reforms, or careers interventions are working, they need a way to assess perceived development during the course. That gives teams a chance to adjust support while students are still enrolled.
Second, a validated scale is most useful when paired with open-text student voice. A measure like SPECA can show where confidence or capability feels weaker, but it cannot explain why. That is where qualitative feedback becomes operationally useful. Open comments can reveal whether the problem sits in placement access for under-served groups, employer links, unclear skills mapping, weak feedback, or uneven careers support across cohorts. Used together, the score shows where to look, and the comments show what to fix. Student Voice Analytics helps teams group employability-related comments at scale and compare those themes against survey results.
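As a rough illustration of joining the two evidence streams, here is a minimal pandas sketch. The frames and column names (student_id, speca_score, theme) are hypothetical, not taken from the paper or from any specific product:

import pandas as pd

def weakest_themes(scores: pd.DataFrame, themes: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    # scores: one row per student with a speca_score column.
    # themes: one row per coded open-text comment, with student_id and theme.
    merged = themes.merge(scores, on="student_id", how="inner")
    summary = (merged.groupby("theme")["speca_score"]
                     .agg(mean_score="mean", students="count"))
    # Themes voiced most often by lower-scoring students rise to the top.
    return summary.sort_values("mean_score").head(n)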
Third, segmentation matters. An institutional average can hide major differences between commuter and residential students, between widening participation groups, between disciplines, or between students with and without placement experience. If a university adopts a stronger employability measure, it should plan from the outset to review results by cohort and connect them to open-text evidence. That leads to fairer decisions and better-targeted support.
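A minimal sketch of that kind of cohort breakdown, under the same hypothetical assumptions, with illustrative attribute columns (discipline, commuter, placement) that an institution would replace with its own segmentation:

import pandas as pd

def segment_report(responses: pd.DataFrame) -> pd.DataFrame:
    # responses: one row per student with speca_score plus cohort attributes.
    return (responses
            .groupby(["discipline", "commuter", "placement"])["speca_score"]
            .agg(mean="mean", n="count")
            .query("n >= 30"))  # suppress small cells before anyone acts on them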
Finally, institutions should treat validated instruments carefully rather than as plug-and-play templates. The paper is valuable precisely because it takes measurement seriously. UK teams should do the same: pilot locally, review wording, and check how well the instrument fits their own context, ideally by co-designing survey questions with students and staff, before treating results as board-level evidence. That protects credibility when results reach senior leaders or external reviewers.
Q: How can a university use a scale like SPECA without overloading students with another long survey?
A: The best route is usually integration, not addition. A university could pilot a small employability block within an existing student experience, careers, or placement survey, then pair the scale responses with one or two open-text prompts about what has most helped or hindered students' work-readiness. That keeps the burden manageable and gives teams evidence they can act on.
Q: What should institutions be cautious about when adapting a validated employability scale?
A: A validated instrument is a strong starting point, not a licence to stop checking. Institutions should review wording for local relevance, pilot the scale with their own students, and be careful about comparing scores across very different disciplines or cohorts without further testing. The paper's value lies in its rigour, and local use should preserve that rigour.
Q: What is the broader implication for student voice work on employability?
A: It suggests that employability should be treated as part of the student experience, not just as an outcome after graduation. When students comment on placements, feedback, employer contact, or career guidance, they are often describing the conditions that shape employability capital in practice. Combining robust survey instruments with systematic comment analysis gives universities a clearer basis for improvement and a stronger case for acting early.
[Paper Source]: Tran Le Huu Nghia, Pham Lan Anh and Nguyen Thi My Duyen, “Measuring students’ self-perceived employability capital attainment: the development and validation of a scale”, Higher Education. DOI: 10.1007/s10734-026-01646-w