Glossary: Pre-employment assessment definitions and related terms

9 minute read

Posted by Chris Platts on 7 April 2023

Welcome to ThriveMap’s glossary of pre-employment assessment terms.

Here we’ve defined 60+ industry terms and grouped them into three categories:

  • Essential
  • Advanced
  • Expert

How many of the terms do you know?

The essentials

Pre-employment assessment: A pre-employment assessment is an evaluation or test that employers use to gather information about job candidates before making a hiring decision.

These assessments are designed to measure various attributes, skills, and traits that are relevant to the job role, helping employers determine whether a candidate is a good fit for the position and the company as a whole.

ThriveMap’s pre-employment assessments are crucial for high-volume recruitment. Companies use ThriveMap to outsource part of their recruitment process, much as they would a recruitment process outsourcing (RPO) provider, because we are experts at creating Realistic Job Assessments. Our expert services and tools reduce time-to-hire, automate high-volume candidate screening, and minimise hiring costs.

Furthermore, our pre-employment assessments are proven to remove hiring bias from the screening and assessment process. Organisations using ThriveMap’s pre-employment assessments build a more diverse workforce with realistic expectations of the roles they are joining. This means lower attrition, a happier culture, and a more profitable operation.

Learn more about ThriveMap here, or scroll down for our full pre-employment assessment glossary.

Cognitive ability test: A type of pre-employment assessment that measures a candidate’s problem-solving, critical thinking, and reasoning abilities. More about cognitive ability tests here.

Personality test: An assessment that evaluates a candidate’s personality traits, such as openness, conscientiousness, extraversion, agreeableness, and neuroticism, to determine their fit for a specific job or company culture. More about personality testing.

Aptitude test: A pre-employment assessment that measures a candidate’s potential to learn new skills and perform well in a particular job role.

Skill test: An assessment that measures a candidate’s proficiency in specific job-related skills, such as computer programming, data analysis, or customer service.

Job simulation: A type of pre-employment assessment that replicates real-world job tasks and scenarios to evaluate a candidate’s ability to perform in a specific role.

Cultural fit assessment: An evaluation that measures a candidate’s alignment with a company’s values, beliefs, and work environment to ensure a successful long-term fit.

Behavioral interview: A structured interview technique that asks candidates to provide examples of past experiences and behaviors to predict their future performance in a specific job role.

Psychometric testing: A broad category of pre-employment assessments that measure a candidate’s cognitive abilities, personality traits, and other psychological factors to predict job performance.

Validity: The extent to which a pre-employment assessment accurately measures what it intends to measure and can predict job performance.

Reliability: The consistency and stability of a pre-employment assessment’s results over time, ensuring that it produces accurate and dependable outcomes.

Test norms: A set of data that provides a benchmark for comparing an individual candidate’s pre-employment assessment results to those of other candidates or a specific population.

Adverse impact: The potential for a pre-employment assessment to unintentionally discriminate against certain groups of candidates, such as those based on race, gender, or age.
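
A common way to check for adverse impact (in the US, the “four-fifths rule” from the EEOC’s Uniform Guidelines) is to compare each group’s selection rate with that of the highest-scoring group; a ratio below 0.8 is a warning sign. Here is a minimal sketch with made-up numbers:

```python
# Four-fifths rule check with illustrative (made-up) figures.
applicants = {"group_a": 200, "group_b": 150}   # candidates assessed per group
hired      = {"group_a": 60,  "group_b": 30}    # candidates passing the assessment

rates = {g: hired[g] / applicants[g] for g in applicants}   # selection rates
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```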

Job analysis: A systematic process of identifying the skills, knowledge, and abilities required for a specific job role to develop appropriate pre-employment assessments.

Cut-off score: A predetermined threshold or minimum score on a pre-employment assessment that a candidate must achieve to be considered for a job role.

Job performance: The effectiveness with which an employee carries out the tasks and responsibilities associated with their role, as well as their overall contribution to the organisation’s objectives.

Employee engagement: The level of commitment, involvement, and enthusiasm an employee has toward their job and the organization they work for.

Situational judgment test: A type of pre-employment assessment that presents candidates with hypothetical, job-related scenarios and asks them to choose the most appropriate course of action.

Quality of hire: A measure of how well a new employee performs in their role, their level of engagement, and their overall contribution to the organization’s success.

Time-to-fill: The amount of time it takes to fill a job vacancy, from the start of the recruitment process to the successful candidate’s first day of employment.

Time-to-productivity: The length of time it takes for a new employee to reach full productivity and contribute effectively to the organization.

Cost-per-hire: The total expenses associated with hiring a new employee, including advertising, recruitment agency fees, screening and assessment costs, and onboarding expenses.
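
As a rough formula, cost-per-hire is total recruitment spend (internal plus external) divided by the number of hires made in the same period. A minimal sketch with illustrative figures:

```python
# Cost-per-hire = (internal costs + external costs) / number of hires.
# All figures below are illustrative.
internal_costs = 18_000   # e.g. recruiter time, referral bonuses
external_costs = 12_000   # e.g. job boards, agency fees, assessment tools
hires = 25

cost_per_hire = (internal_costs + external_costs) / hires
print(f"Cost per hire: £{cost_per_hire:,.2f}")   # £1,200.00
```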


New hire attrition: The turnover of employees who leave a company within a short period after being hired, often due to poor job fit or unmet expectations.

Assessment center: A selection method that involves multiple evaluators and a series of exercises or assessments designed to evaluate a candidate’s job-related competencies and skills.

Test fairness: The degree to which a pre-employment assessment is free from bias and provides an equal opportunity for all candidates to demonstrate their abilities, regardless of their background or personal characteristics.

Test security: The measures taken to protect the integrity of a pre-employment assessment, such as ensuring confidentiality, preventing cheating, and safeguarding test content.

Advanced terms

Content validity: The extent to which a pre-employment assessment’s content is representative of and relevant to the job for which it is being used.

Criterion-related validity: The degree to which a pre-employment assessment’s results can accurately predict a candidate’s future job performance or other relevant outcomes.

Construct validity: The extent to which a pre-employment assessment accurately measures the underlying psychological traits or constructs it is designed to evaluate.

Face validity: The degree to which a pre-employment assessment appears, on the surface, to measure what it is intended to measure, making it more likely to be accepted by candidates and employers.

Voluntary attrition: The turnover of employees who leave a company by choice, such as for personal reasons or to pursue other job opportunities.

Involuntary attrition: The turnover of employees who are terminated by the employer due to performance issues, layoffs, or other organizational reasons.

Biographical data questionnaire: A pre-employment assessment that collects information about a candidate’s personal history, experiences, and achievements to predict their potential job performance and fit.

Computer Adaptive Testing (CAT): A type of assessment that adjusts the difficulty level of questions based on a candidate’s previous responses, providing a more accurate measurement of their abilities in a shorter amount of time.
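
In simplified terms, an adaptive test repeatedly serves the unanswered item whose difficulty sits closest to the current ability estimate, then nudges that estimate up or down depending on the response. The toy sketch below illustrates the loop only; real CAT engines use IRT-based estimation rather than a fixed step:

```python
import random

# Toy adaptive loop: item difficulties on an arbitrary scale, ability estimate
# updated with a fixed step rather than a proper IRT estimator.
items = {"q1": -1.0, "q2": -0.5, "q3": 0.0, "q4": 0.5, "q5": 1.0}
ability, step = 0.0, 0.5

for _ in range(3):
    # Pick the remaining item closest in difficulty to the current estimate.
    item = min(items, key=lambda q: abs(items[q] - ability))
    difficulty = items.pop(item)
    correct = random.random() < 0.5          # stand-in for the candidate's answer
    ability += step if correct else -step    # crude update; real CATs re-estimate via IRT
    print(item, "correct" if correct else "incorrect", "-> ability estimate", ability)
```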

Item Response Theory (IRT): A statistical framework used to model the relationship between a candidate’s underlying ability and their performance on assessment items, often used in the development of computer adaptive tests.

Test-retest reliability: A measure of the consistency of a pre-employment assessment’s results when administered to the same candidate at different times.
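
In practice this is usually reported as the correlation between the two sets of scores, with values closer to 1 indicating more stable results. A minimal sketch using Python’s standard library and made-up scores:

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

first_sitting  = [72, 65, 88, 54, 91, 77]   # illustrative scores, first administration
second_sitting = [70, 68, 85, 58, 89, 80]   # same candidates, second administration

print(f"Test-retest reliability: {correlation(first_sitting, second_sitting):.2f}")
```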

Inter-rater reliability: The consistency of scores assigned by different raters or evaluators when assessing the same candidate’s performance on a pre-employment assessment.

Convergent validity: The degree to which two different assessments measuring the same underlying construct yield similar results.

Discriminant validity: The extent to which a pre-employment assessment can distinguish between candidates who possess different levels of the trait or ability being measured.

Standard error of measurement (SEM): An estimate of the amount of error associated with a candidate’s test score, used to determine the precision of the assessment.
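
Under classical test theory, the SEM is commonly estimated from the spread of observed scores and the test’s reliability:

```latex
\mathrm{SEM} = SD_x \sqrt{1 - r_{xx}}
```

So, for example, a test with a score standard deviation of 10 and a reliability of 0.91 has an SEM of 10 × √0.09 = 3.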

Pre-test/post-test design: A research design in which candidates’ abilities are assessed before and after an intervention (such as training or education) to determine the effectiveness of the intervention.

Test blueprint: A document that outlines the structure and content of a pre-employment assessment, including the types of questions, the weighting of different sections, and the overall difficulty level.

Test equating: A statistical process used to adjust the scores of different forms or versions of a pre-employment assessment to ensure that they are comparable and can be used interchangeably.

Norm-referenced scoring: A scoring method that compares a candidate’s performance on a pre-employment assessment to the performance of a reference group or norm group.

Criterion-referenced scoring: A scoring method that compares a candidate’s performance on a pre-employment assessment to a predefined set of criteria or standards.
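
The difference between the two scoring methods above is easiest to see side by side: the same raw score can be reported as a standing relative to a norm group or as a pass/fail decision against a fixed standard. A minimal sketch with made-up numbers:

```python
raw_score = 68                                  # illustrative candidate score
norm_group = [52, 55, 60, 61, 63, 66, 70, 74]   # scores from a reference (norm) group
pass_mark = 65                                  # predefined standard

# Norm-referenced: where does the candidate sit relative to the norm group?
percentile = 100 * sum(s <= raw_score for s in norm_group) / len(norm_group)

# Criterion-referenced: does the candidate meet the fixed standard?
meets_standard = raw_score >= pass_mark

print(f"Norm-referenced: {percentile:.0f}th percentile")
print(f"Criterion-referenced: {'pass' if meets_standard else 'fail'}")
```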

Expert mode

Anchored rating scales: A type of rating scale used in assessments that provides specific behavioral examples or descriptions for each level of performance, reducing subjectivity and increasing the consistency of ratings among evaluators.

Differential Item Functioning (DIF): A statistical analysis that examines whether different groups of candidates respond differently to individual assessment items, potentially indicating the presence of bias or unfairness in the test.

Utility analysis: A quantitative method used to evaluate the effectiveness of a pre-employment assessment or selection method by estimating its impact on organizational outcomes, such as productivity and cost savings.
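
One widely cited formulation is the Brogden-Cronbach-Gleser model, which estimates the monetary gain from a selection procedure roughly as:

```latex
\Delta U = N_s \, T \, r_{xy} \, SD_y \, \bar{Z}_s \;-\; N_a \, C
```

where N_s is the number hired, T their average tenure, r_xy the assessment’s validity, SD_y the standard deviation of job performance in monetary terms, Z̄_s the average standardised assessment score of those hired, N_a the number of applicants assessed, and C the cost per applicant. Other formulations exist, but they follow the same logic of weighing predicted performance gains against assessment costs.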

Multitrait-multimethod matrix (MTMM): A research design used to evaluate the convergent and discriminant validity of multiple assessment methods measuring multiple constructs or traits.

Response process validity: An aspect of validity that examines whether candidates interpret and respond to assessment items in the way they were intended, providing evidence that the test is measuring the intended construct.

Test accommodations: Modifications made to the administration of a pre-employment assessment to ensure that candidates with disabilities or other special needs have an equal opportunity to demonstrate their abilities.

Test speediness: A characteristic of a pre-employment assessment that requires candidates to complete the test within a specified time limit, potentially influencing their performance and the interpretation of their scores.

Rasch model: A specific type of Item Response Theory (IRT) model used to estimate the difficulty of test items and the ability of candidates based on their responses to those items.
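
In the Rasch model, the probability of answering an item correctly depends only on the gap between the candidate’s ability (θ) and the item’s difficulty (b):

```latex
P(X = 1 \mid \theta, b) = \frac{e^{\,\theta - b}}{1 + e^{\,\theta - b}}
```

A candidate whose ability exactly matches an item’s difficulty therefore has a 50% chance of getting that item right.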

Domain sampling theory: A theoretical framework that underlies the development of criterion-referenced tests, positing that a test should sample a representative set of tasks or content from the domain of interest.

Confidence interval: A range of scores within which a candidate’s true score on a pre-employment assessment is likely to fall, with a specified level of confidence, accounting for the potential measurement error associated with the test.
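
A common construction uses the standard error of measurement: an approximate 95% interval is the observed score plus or minus 1.96 times the SEM. For example, an observed score of 100 with an SEM of 3 gives:

```latex
100 \pm 1.96 \times 3 \approx [94.1,\ 105.9]
```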

Classical Test Theory (CTT): A traditional approach to test development and scoring that assumes a candidate’s observed test score is composed of their true score and a random error component.
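
In classical test theory notation, every observed score X is treated as a true score T plus random error E, and reliability is the proportion of observed-score variance attributable to true scores:

```latex
X = T + E, \qquad r_{xx} = \frac{\sigma_T^{2}}{\sigma_X^{2}}
```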

Forced-choice assessment: A type of pre-employment assessment that presents candidates with multiple statements or options and requires them to choose the one that best describes their preferences or behaviors, reducing the impact of social desirability bias.

Social desirability bias: The tendency for candidates to respond to assessment items in a way that makes them appear more favorable or socially acceptable, potentially distorting their true scores.

Test form equivalence: The degree to which different forms or versions of a pre-employment assessment are interchangeable, providing comparable results and maintaining the same level of difficulty.

Universal design for assessment: A set of principles and guidelines aimed at ensuring that pre-employment assessments are accessible, inclusive, and fair for all candidates, regardless of their background or abilities.


