What does an ability test measure?
Ability tests (also known as cognitive ability tests) measure a candidate's maximum cognitive performance. The results show how a candidate has performed in comparison with a diverse group of previous test takers.
This allows us to interpret the results in terms of what is typical within a given group. Each report contains details of the norm group against which candidates have been compared.
How does an Ability Test work?
Ability Tests on the Clevry platform work through the randomisation of what we call forms. Forms are clusters of questions drawn from a larger pool known as an item bank.
The selection of forms is randomised, so that different candidates are presented with different sets of forms. The ordering of the questions within each form is then also randomised. This ensures that different candidates are presented with different selections of questions at random, which helps to ensure the security of our assessments and to prevent behaviours such as cheating and collusion.
This also allows us to match different sets of questions for difficulty to ensure that all candidates receive assessments which are of the same level of difficulty overall, even if individual clusters or items vary.
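The randomised form selection and difficulty matching described above can be sketched in a few lines of code. This is an illustrative sketch only, assuming each form carries a pre-calibrated difficulty rating; the names `ITEM_BANK` and `assemble_test` are hypothetical and not part of the Clevry platform.

```python
import random

# Hypothetical item bank: each form is a cluster of questions with a
# pre-calibrated difficulty rating (proportion-correct scale, 0-1).
ITEM_BANK = {
    "form_a": {"difficulty": 0.52, "questions": ["q1", "q2", "q3"]},
    "form_b": {"difficulty": 0.49, "questions": ["q4", "q5", "q6"]},
    "form_c": {"difficulty": 0.51, "questions": ["q7", "q8", "q9"]},
    "form_d": {"difficulty": 0.50, "questions": ["q10", "q11", "q12"]},
}

def assemble_test(n_forms=2, target=0.50, tolerance=0.05):
    """Pick a random set of forms whose mean difficulty is close to the
    target, then shuffle the questions within each selected form."""
    while True:
        chosen = random.sample(sorted(ITEM_BANK), n_forms)
        mean_difficulty = sum(ITEM_BANK[f]["difficulty"] for f in chosen) / n_forms
        if abs(mean_difficulty - target) <= tolerance:
            break  # this random selection is acceptably close in difficulty
    questions = []
    for form in chosen:
        shuffled = ITEM_BANK[form]["questions"][:]
        random.shuffle(shuffled)  # randomise question order within the form
        questions.extend(shuffled)
    return chosen, questions
```

Because both the form selection and the within-form ordering are random, two candidates are unlikely to see the same test, yet the difficulty tolerance keeps the overall tests comparable.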
Types of Ability Tests on Clevry
Ability Tests on our assessment platform come in 3 different levels: CWS (Essential), B2C (Enhanced) and Utopia (Expert).
CWS (Essential): Basic comprehension assessment. Designed for blue-collar, manual and industrial roles.
B2C (Enhanced): Our most widely used assessment. Designed for entry- and mid-level roles, such as customer service or administration.
Utopia (Expert): The highest level of assessment we offer. Designed for graduates, professionals and specialists.
CWS (Essential) Ability Tests
The CWS (Essential) ability tests were designed primarily for manufacturing environments and public utilities. The instruments are suited to organisations which have a strong production or engineering focus.
Within this industrial sector, the instruments are relevant to a range of occupational groups:
Hourly paid operatives
First level supervisors
Many of the candidates who take the CWS tests are people who left school after achieving some qualifications at GCSE level.
This series of ability tests has been designed to be appropriate across a broad range of difficulty levels. This means that they are suitable for assessing a range of individuals, from those who have no educational qualifications to those who have achieved A-levels or post-school Diplomas/Certificates.
Candidates above this educational level (e.g. graduates) are likely to require tests which are different in nature, as well as difficulty, in order to reflect the responsibilities of the jobs for which they are being considered. There are 3 different assessments in this series.
B2C (Enhanced) Ability Tests
Our series of B2C Ability Tests has been developed for a range of different applications. The instruments in these assessments are relevant to a range of occupational groups and may be applied to various roles in which business administration skills are needed.
This may include but is not limited to:
Customer service staff
Call centre staff
B2C Ability Tests are particularly suited to situations in which employees are required to perform a wide range of tasks, such as attending to a customer while conducting various administrative duties. In such situations, employees are required to demonstrate both conscientiousness and reasoning skills.
The B2C tests have been designed to be appropriate across a broad range of difficulty levels. This means that they are suitable for assessing a range of individuals, from those who have few or no educational qualifications to those who have achieved A-levels or post-school Diplomas/Certificates. Candidates above this educational level (e.g. graduates) are likely to require tests which are different in nature, as well as difficulty, in order to reflect the responsibilities of the jobs for which they are being considered.
The B2C (Enhanced) series is the most popular of our ability assessments. There are 4 different assessments in this series.
The initial versions were developed based on job analysis conducted in-house by our business psychologists. Trialling began with applicants to an administrative post, who varied in educational qualifications, age and experience. Based on this trialling, the items were revised for difficulty level and assessment length, and the trial data were also used to determine the incorrect answer options for the numerical ability test.
The second phase of trialling was conducted with a large and varied sample of both applicants and job incumbents from a range of industries. These candidates varied widely in age, length of work experience, and educational qualifications. The trialling resulted in minor changes to the assessments and was the basis for reliability analysis of the items.
The final versions were then published, but have been subject to continual development and changes based on subsequent analysis.
Utopia (Expert series of Ability Tests)
The Utopia series consists of high-level critical reasoning tests. They measure abilities which are particularly relevant to the performance of graduate, professional and specialist roles.
The Utopia series has been designed to be appropriate for the assessment of individuals of graduate calibre. Candidates assessed using the instruments have typically achieved at least A-Levels or post-school Diplomas/Certificates.
There are 3 different assessments in this series.
Developing ability tests for high level candidates
The ability assessments in our Utopia series are based around a common scenario to maximise their face validity and interest.
The initial items were developed in partnership with an investment banking firm, based on job analysis of their high level roles. This was conducted in house by our team of business psychologists.
While the first rendition of paper-and-pencil items was piloted and standardised with this sponsor, the items then underwent further development to ensure they were suitable for wider test use, which where relevant involved further general market research into specific forms of assessment.
Pre-trialling involved piloting the items on a group of graduates, who provided feedback on the tests from a candidate's perspective, and with members of our consultancy team, who provided feedback from a professional test publisher's perspective. Some changes were made to the tests in order to make them more candidate-centred.
Next, the trialling versions of the tests were trialled on a sample of graduate candidates applying for roles in a variety of sectors. In each test, items were selected for the final versions on the basis of their contribution to the psychometric reliability of the overall measure and their level of difficulty. Items which were either too easy or too difficult were removed.
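The item-selection step described above can be illustrated with a classical item-analysis sketch. This is an assumption-laden example, not Clevry's actual procedure: item difficulty is taken as the proportion of candidates answering correctly, and an item's contribution to reliability is approximated by its correlation with the total score on the remaining items (the corrected item-total correlation).

```python
from statistics import mean, stdev

# Toy 0/1 response matrix: rows are candidates, columns are items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

def item_difficulty(item):
    """Proportion of candidates answering the item correctly."""
    return mean(row[item] for row in responses)

def item_total_correlation(item):
    """Pearson correlation between the item score and the total score on the
    rest of the test (a rough proxy for contribution to reliability)."""
    xs = [row[item] for row in responses]
    ys = [sum(row) - row[item] for row in responses]
    mx, my = mean(xs), mean(ys)
    sx, sy = stdev(xs), stdev(ys)
    if sx == 0 or sy == 0:
        return 0.0  # no variance, so no usable correlation
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (sx * sy)

# Keep items that are neither too easy nor too hard and that correlate
# positively with the rest of the test.
kept = [i for i in range(4)
        if 0.2 <= item_difficulty(i) <= 0.9 and item_total_correlation(i) > 0.2]
```

In this toy data the last item is answered correctly by everyone, so it is dropped as too easy; the remaining items survive both filters.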
The process above describes the initial development phase for each series of ability test. Since then, the assessments have undergone further development based on continual use and analysis. For more specific information about the development of our ability tests, contact us using the details at the bottom of this page.
Verifying ability test scores for candidates
Once your candidates have completed a cognitive ability test, you can request that they complete a quick verification test to measure how closely their new scores match those from their first completion of the assessment. This can be used if you suspect some form of collusion or cheating, or if you would simply like to measure the candidate's ability twice to gather more information.
Verification tests are a shorter version of the original ability test the candidate completed, using different questions. The results of this assessment are presented in the ability test report alongside other ability test results.
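One common way to judge whether a verification score is consistent with the original is to compare the difference between the two scores against the test's standard error of measurement. This is a hypothetical sketch of that idea, not necessarily how the Clevry report performs the comparison; the standard deviation and reliability values are illustrative defaults.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: sd * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def scores_consistent(original, verification, sd=15.0, reliability=0.85, z=1.96):
    """Flag the verification as consistent if the two scores differ by less
    than z standard errors of the difference between two measurements."""
    se_diff = math.sqrt(2) * sem(sd, reliability)  # SE of a difference score
    return abs(original - verification) < z * se_diff
```

With these illustrative values, a 10-point difference falls within the expected band, while a 30-point difference would be flagged for further investigation.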
Assessing cognitive ability accessibly
Our assessments use a “power test” philosophy, meaning that candidates answer increasingly difficult questions within fair time limits. This minimises demands on reading speed and processing time, a common source of test bias against those who read or process more slowly than others (for example, candidates for whom English is not a first language, or who have visual impairments or a form of dyslexia/dyspraxia).
Our assessments measure underlying abilities, uncontaminated by reading/processing speed.
We also comply with UK DDA requirements to ensure maximum accessibility for respondents. The candidate interface conforms to level “Double-A” of the W3C accessibility guidelines. This means that all fonts are resizable and fully compatible with accessibility devices such as screen readers and refreshable Braille displays.
Users can also change the contrast of the screen and background colour of the assessments, and test timers can be adjusted for candidates who may need increased time limits.
If you have any further questions about assessing cognitive ability on Clevry, please contact one of the team on 01273 734000 or email us via our contact page.