For HR directors, talent assessment specialists, and organisational psychologists, the question of fairness in psychometric testing has never been more relevant. As organisations strive to attract and retain diverse talent, the conversation around neurodiversity has rightly moved to the forefront of inclusion strategy. Yet amid growing awareness, many practices continue to be guided by assumptions rather than evidence.
A new collaborative study between Clevry and Digit (Digital Futures at Work Research Centre) set out to test these assumptions head-on. The research, “Neurodiversity and Online Employment Testing: How to Promote Fairness Through Good Psychometric and User Interface Design”, explored how neurodivergent and neurotypical individuals experience online psychometric assessments. The results challenge several long-held beliefs and provide clear, evidence-based guidance for assessment designers and HR leaders alike.

Why this research matters
Psychometric tests play a central role in recruitment and development processes, from early-career hiring to executive selection. They offer objectivity, consistency, and scalability: qualities essential to fair, evidence-based decision making. However, questions have persisted about whether these tools inadvertently disadvantage neurodivergent candidates, such as those with ADHD, autism, or dyslexia.
Most organisations want to be inclusive, but many of their assessment practices have been shaped by inherited rules of thumb: add extra time, simplify tasks, or avoid certain test formats altogether. These approaches are well-intentioned, yet rarely grounded in empirical data. The current research sought to determine whether these assumptions hold up under scrutiny and, crucially, what HR professionals should be doing instead.
How the research was conducted
The project was divided into three interconnected studies, combining quantitative and qualitative methods.
- Study 1 examined how 314 neurodivergent and neurotypical participants performed on standard online reasoning and personality assessments. It measured test scores, time taken, ease of use, mood, and online behaviour.
- Study 2 used eight focus groups to explore the lived experiences of 23 participants, drawing out themes around stress, design, accessibility, and engagement.
- Study 3 investigated 56 participants’ experiences of online situational judgement tests (SJTs), analysing how design elements such as scenario length, context, and response format affected preference and difficulty.
Participants represented a range of neurodivergent conditions, with ADHD/ADD and autism most common. A steering committee of experts in both psychometrics and neurodiversity guided the project, ensuring methodological rigour and practical relevance.
Key findings
1. Few measurable differences in test performance
Perhaps the most striking outcome was the absence of major differences between neurodivergent and neurotypical participants across most metrics. Test scores, completion times, and perceived ease of use were broadly comparable. In some cases, neurodivergent individuals even completed numerical and abstract reasoning tests faster.
This finding runs counter to the widespread assumption that neurodivergent candidates automatically require extra time to complete assessments. It suggests that a universal extension policy may be unnecessary and could even distort test standardisation.
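To make “broadly comparable” concrete, here is a minimal sketch of how such a group comparison is commonly run: a Welch’s t-test paired with Cohen’s d, so that a non-significant result is read alongside the size of the difference rather than in isolation. The scores, group sizes, and means below are entirely synthetic illustrations, not the study’s data or analysis code.

```python
# Illustrative sketch only: synthetic data, not the study's dataset or analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical reasoning-test scores (0-100) for two similarly sized groups.
neurodivergent = rng.normal(loc=62.0, scale=12.0, size=150)
neurotypical = rng.normal(loc=63.0, scale=12.0, size=164)

# Welch's t-test: does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(neurodivergent, neurotypical, equal_var=False)

# Cohen's d with a pooled standard deviation: how large is the gap in practice?
pooled_sd = np.sqrt((neurodivergent.var(ddof=1) + neurotypical.var(ddof=1)) / 2)
cohens_d = (neurodivergent.mean() - neurotypical.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

In a study measuring several metrics, the same comparison would be repeated for completion times and ease-of-use ratings, with effect sizes reported so that “no significant difference” is not mistaken for “identical”.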
2. Mood differences exist, but experience is consistent
Neurodivergent participants reported lower mood before and after taking assessments, though the tests themselves did not worsen mood. This pattern implies that baseline emotional factors, rather than the test design, may explain differences in self-reported comfort.
In practice, this means organisations should not assume psychometrics themselves create inequity, but should still recognise the role that test anxiety, confidence, and prior experiences can play in shaping candidate perceptions.
3. Design quality matters for everyone
Both neurodivergent and neurotypical participants highlighted similar issues that affect user experience:
- Timers can elevate stress and reduce concentration.
- Overly complex or ambiguous wording increases cognitive load.
- Visually engaging yet clear interfaces support focus and understanding.
- Practice materials and preparatory information reduce uncertainty and anxiety.
The message is clear: good test design benefits all candidates, not just those who are neurodivergent.
4. Preferences in situational judgement tests are broadly shared
When it came to SJTs, participants across both groups preferred shorter, contextualised scenarios that felt relevant to real workplace situations. Opinions on response formats were mixed (some favoured “most/least” options, others rating scales), but preferences were not strongly linked to neurodivergence type.
Interestingly, autistic and dyslexic participants reported finding highly hypothetical scenarios more cognitively demanding, highlighting the importance of realism and context in test design.
What this means for HR and assessment leaders
Evidence over assumption
The long-standing practice of applying blanket accommodations, such as automatically extending time limits for neurodivergent candidates, is not supported by this evidence. Instead, the research advocates for a case-by-case approach where adjustments are discussed individually and tailored to specific needs.
Reaffirming confidence in existing psychometrics
The results provide reassurance that well-designed, validated assessments are not inherently biased against neurodivergent individuals. This is an important finding for HR and talent leaders who rely on psychometrics as part of evidence-based hiring. It means fairness can be achieved through good practice and consistent administration, rather than wholesale redesign of testing frameworks.
The value of preparation and transparency
Providing candidates with clear information, example items, and guidance on how to navigate assessments reduces stress and promotes perceived fairness. This is particularly valuable for neurodivergent individuals, but it improves the candidate experience for everyone.
Focus on the test environment, not just the test
Allowing candidates the autonomy to choose where and when they complete online assessments (for example, in a quiet space or at a preferred time of day) can make a significant difference. Flexibility supports comfort without undermining standardisation.
Implications for assessment providers
For psychometric publishers and platform developers, the study underscores the importance of design simplicity, accessible interfaces, and evidence-based adjustments.
A few practical principles emerge:
- Clarity trumps complexity: Remove unnecessary linguistic or numerical hurdles unrelated to the construct being measured.
- Timers should serve measurement, not anxiety: Use them only where time is psychometrically relevant.
- Engagement aids performance: Visual clarity and good interface design help all users, particularly those with attention-related differences.
- Testing conditions matter: Consider guidance on environmental factors (lighting, noise, time of day) to enhance standardisation and comfort simultaneously.
By embedding these principles, providers can ensure inclusivity without compromising reliability or validity.
Beyond compliance: the business case for inclusivity
From an organisational perspective, inclusive testing is not solely an ethical or legal imperative; it is a strategic advantage. Fair assessments expand access to under-represented talent pools and signal a culture of openness and respect.
Reducing barriers for neurodivergent candidates means attracting individuals who often bring exceptional creativity, analytical ability, and problem-solving skills. Furthermore, when candidates perceive a process as fair, they are more likely to speak positively about the organisation, strengthening employer brand reputation in competitive labour markets.
Inclusive assessment also protects against the reputational risk of poorly evidenced accommodations. A consistent, transparent, and scientifically grounded approach ensures defensibility and trust across all stakeholders: candidates, hiring managers, and regulators alike.
Moving the conversation forward
While the findings are encouraging, they also highlight areas for future exploration. The studies were conducted in a low-stakes context; further research in real selection settings will be crucial to confirm how stress, anxiety, and perceived pressure interact with performance. There is also scope to expand representation of specific neurodivergent conditions, particularly dyslexia, dyspraxia, and Tourette’s syndrome.
One particularly interesting avenue is the relationship between time limits and score equivalence. If neurodivergent candidates are, on average, faster rather than slower than their peers, current assumptions about “extra time” need thorough re-examination. Future studies will test the point at which time constraints begin to meaningfully alter performance, providing a stronger empirical foundation for any adjustments; a toy simulation of that threshold question appears below.
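As a purely hypothetical illustration, and not part of the published research, the sketch below simulates per-item response times, imposes a per-candidate time cap that censors any items never reached, and sweeps the cap to show where mean scores begin to fall away from the untimed baseline. Every number here is invented for demonstration.

```python
# Illustrative sketch only: a toy simulation, not the study's methodology.
import numpy as np

rng = np.random.default_rng(7)
n_candidates, n_items = 500, 30

# Hypothetical per-item response times (seconds) and per-item correctness.
item_times = rng.lognormal(mean=3.3, sigma=0.4, size=(n_candidates, n_items))
correct = rng.random((n_candidates, n_items)) < 0.65

# Untimed baseline: every attempted item counts towards the score.
untimed_mean = correct.sum(axis=1).mean()

for limit in (600, 750, 900, 1200):  # candidate-level time caps, in seconds
    # Items are answered in order until cumulative time exceeds the cap;
    # anything beyond that point is censored (scored as not reached).
    reached = item_times.cumsum(axis=1) <= limit
    timed_mean = (correct & reached).sum(axis=1).mean()
    print(f"limit {limit:>4}s: mean score {timed_mean:.1f} (untimed {untimed_mean:.1f})")
```

In this toy setup, generous caps reproduce the untimed mean almost exactly, while tighter caps begin to depress scores. An empirical version of the same sweep, run on real response-time data, is what would pin down where a time limit stops being psychometrically neutral.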
What HR leaders should take away
- Current best practice works when applied consistently. Psychometric tests built and validated to professional standards already offer fair measurement across diverse groups.
- Adjustments should be individual, not automatic. Ask candidates what they need; avoid blanket policies.
- Candidate experience is universal. Clarity, preparation, and communication benefit everyone.
- Inclusive design strengthens business outcomes. Fair testing isn’t just compliance; it drives access to wider talent and enhances brand equity.
The debate about neurodiversity and online assessment has often been dominated by anecdote and assumption. This research provides the clarity many HR and assessment leaders have been waiting for. Neurodivergent and neurotypical candidates, it turns out, experience online psychometrics more similarly than differently.
That insight doesn’t diminish the need for inclusion; it strengthens it. It shows that fairness arises not from over-compensation or token adjustments, but from intelligent design, evidence-based practice, and respect for individual difference.
For organisations serious about inclusive hiring, the message is straightforward: focus on good science, good design, and good dialogue with your candidates. Fairness will follow.
As part of the Digital Futures at Work Research Centre (Digit), this work was supported by the UK Economic and Social Research Council [grant number ES/S012532/1], which is gratefully acknowledged.