Updated Apr 09, 2026
A survey analytics tweak can look minor until teams use it to judge burden, spot fieldwork problems, and decide whether feedback data is reliable enough to act on. Jisc's switch from mean to median response time matters because it changes one of the quickest signals institutions see while a student survey is live. On 6 March 2026, Jisc updated the Online Surveys change log for version 3.34.2. The release notes say Jisc Online Surveys now shows median response time on the Insights page instead of the mean, and adds new standalone question types for single-choice and multi-choice Choice and Grid questions. For Student Experience teams, the takeaway is practical: even a small platform change can shift how you read the evidence you already collect.
The 6 March release contains three user-facing changes. Jisc says it has added standalone question types for single-choice and multi-choice Choice and Grid questions, redesigned the Add item menu to accommodate them, and changed the Average response time on the Insights page to display the median instead of the mean. The release also switches the response-time display format to HH MM SS. Taken together, those changes give survey teams a slightly clearer setup workflow and a more reliable operational signal for monitoring live surveys.
"Changed the Average response time on the Insights page to display the median instead of the mean."
This sits inside Jisc's wider Insights feature, introduced on 19 November 2025 as a way to see quickly how a survey is performing. It also follows Jisc's earlier file upload update for student feedback surveys, which changed what evidence teams can collect at the point of response. The scope here is operational, not regulatory. It does not change NSS, PTES, PRES, UKES, or OfS guidance. It affects institutions using Jisc Online Surveys for local module evaluations, pulse surveys that support earlier intervention, service feedback, or other student experience questionnaires. That distinction matters because teams can adjust local practice without confusing a platform update with a change in national survey rules.
Jisc does not explain the rationale for the switch from mean to median in the release note. In practice, the likely effect is a more stable picture of response behaviour, because median response time is less sensitive to a small number of unusually long sessions, paused responses, or abandoned survey windows than the mean. For institutions, that means a sturdier benchmark when deciding whether a questionnaire is genuinely too demanding or simply affected by a few outlier sessions.
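To make that concrete, here is a minimal sketch with illustrative figures (not Jisc data) showing how a few paused or abandoned sessions can pull a mean completion time well above what a typical respondent experiences, while the median barely moves.

```python
from statistics import mean, median

# Illustrative completion times in seconds for one survey wave (not real Jisc data).
# Most respondents finish in 4-7 minutes; three sessions were left open in a browser tab.
times_s = [250, 270, 290, 310, 330, 360, 380, 400, 420, 3600, 5400, 7200]

print(f"mean:   {mean(times_s) / 60:.1f} minutes")    # ~26.7, inflated by three outliers
print(f"median: {median(times_s) / 60:.1f} minutes")  # ~6.2, close to the typical respondent
```

On these numbers the mean suggests a half-hour survey; the median shows most students finished in about six minutes, which is why the median is the sturdier burden signal.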
The first implication is that survey teams should revisit how they interpret platform analytics. If you use response-time metrics to judge whether a questionnaire is too long or confusing, the median is usually a better operational signal than the mean. The switch will not eliminate poor survey design, but it should reduce the risk that a few outlier responses make a reasonable instrument look more burdensome than it is. The benefit is a cleaner basis for deciding when a survey really needs to be shortened or simplified.
The second implication is comparability. If your institution tracks survey performance across waves, schools, or question sets, log the release date of the platform change and avoid comparing pre-March and post-March response-time figures without context. That is the same basic discipline we recommend in our NSS open-text analysis methodology: keep version control clear so that method changes do not get mistaken for experience changes. Do that consistently, and you are less likely to misread a platform change as a shift in student behaviour.
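One lightweight way to keep that discipline is to record platform releases in a method log and tag each wave's metrics with the basis they were calculated on. The sketch below assumes a simple pandas table of wave-level figures; the column names and dates are illustrative, not a Jisc export format.

```python
import pandas as pd

# Illustrative wave-level metrics; column names are our own, not a Jisc export.
waves = pd.DataFrame({
    "wave": ["2025-11", "2026-02", "2026-04"],
    "response_time_min": [6.4, 6.1, 5.2],
})

# Method log: platform changes that affect how a metric is calculated or displayed.
method_log = {
    "2026-03-06": "Jisc Online Surveys 3.34.2: Insights shows median response time (was mean)",
}

# Tag each wave so pre- and post-change figures are never compared without context.
cutover = pd.Timestamp("2026-03-06")
waves["metric_basis"] = (pd.to_datetime(waves["wave"]) >= cutover).map(
    {True: "median (post 3.34.2)", False: "mean (pre 3.34.2)"}
)
print(waves)
```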
The third implication is survey design governance. The new standalone question types are a prompt to review shared templates, especially if multiple teams build module evaluations or service surveys. A clearer choice between single-answer and multi-answer formats should help reduce setup mistakes, but only if local guidance is updated as well. Our summary of how teaching evaluation surveys work better when students and staff help design them is a useful reminder that question format and question purpose need to be reviewed together. For a related methodological lens, our summaries on what gets students to fill in teaching evaluations and non-response bias in student evaluations underline that response data is only valuable when the instrument is both usable and representative. The practical payoff is fewer avoidable survey errors and stronger evidence when results are reviewed centrally.
At Student Voice AI, we see a consistent pattern: when a survey is too long, badly structured, or hard to complete on mobile, the quality of the open-text responses often falls with it. Comments become shorter, thinner, or vanish entirely. A platform metric such as median response time is not a substitute for response-rate monitoring or text analysis, but it can be a useful early warning sign that a survey needs simplifying. That makes it valuable not as a headline metric, but as a prompt to check whether the evidence you are collecting is still usable.
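As an illustration of using the figure as a prompt rather than a headline, a simple check might compare the dashboard's median completion time against a local threshold and against the median length of open comments in the response export. The thresholds and function below are assumptions for the sketch, not recommendations or part of the Jisc platform.

```python
from statistics import median

# Rough early-warning check: flag a survey for review if completion time looks heavy
# or open comments look thin. Thresholds are illustrative assumptions.
def survey_needs_review(median_time_s: float, comments: list[str],
                        time_limit_s: float = 900, min_comment_chars: int = 40) -> bool:
    lengths = [len(c.strip()) for c in comments if c and c.strip()]
    thin_comments = not lengths or median(lengths) < min_comment_chars
    return median_time_s > time_limit_s or thin_comments

# Example: an 11-minute median completion time, but comments are short and sparse.
print(survey_needs_review(660, ["ok", "fine", "", "too long", "good module overall"]))  # True
```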
The practical next step is to keep the collection and analysis loop joined up. Use platform analytics to refine the instrument, then analyse open comments with a governed method so you can see what students are actually saying and whether changes improved the evidence you collect. If you want to connect Jisc survey operations with stronger comment analysis, see how Student Voice Analytics helps institutions analyse module evaluation, pulse survey, and service feedback comments with one reproducible method. Then use our student comment analysis governance checklist to align that work across teams.
Q: What should institutions using Jisc Online Surveys do now?
A: Review any guidance or templates used for student surveys, note the 6 March 2026 release in your method log, and brief local survey owners that Insights now reports median response time. If you compare survey burden across waves, keep that change in view so you do not mistake a dashboard update for a change in student behaviour.
Q: When does this change apply, and does it affect national surveys such as NSS or PRES?
A: The change was recorded in Jisc Online Surveys version 3.34.2 on 6 March 2026. It affects institutions using Jisc Online Surveys, but it does not change the methodology of NSS, PTES, PRES, UKES, or other national surveys.
Q: What is the broader implication for student voice practice?
A: The update is a reminder that student voice quality depends on survey operations as much as survey questions. Better instrument design, cleaner monitoring, and consistent text analysis all help institutions collect feedback that is easier to trust and act on.
[Jisc Online Surveys]: "Change log" Published: 2026-03-06
[Jisc Online Surveys]: "Introducing Insights: see how your survey is performing" Published: 2025-11-19
Source URL: https://onlinesurveys.jisc.ac.uk/change-log/