Welcome to our comprehensive glossary of pre-employment assessment terms. This resource is designed to help you familiarize yourself with the key concepts, techniques, and terminology in the field of pre-hire assessments. Whether you are new to the world of assessments or looking to expand your understanding, this glossary provides explanations of essential, intermediate, and expert-level terms.
Pre-employment assessment: A test or evaluation used during the hiring process to measure a candidate’s skills, abilities, personality traits, or cultural fit before they are offered a job.
Cognitive ability test: A type of pre-employment assessment that measures a candidate’s problem-solving, critical thinking, and reasoning abilities.
Personality test: An assessment that evaluates a candidate’s personality traits, such as openness, conscientiousness, extraversion, agreeableness, and neuroticism, to determine their fit for a specific job or company culture.
Aptitude test: A pre-employment assessment that measures a candidate’s potential to learn new skills and perform well in a particular job role.
Skill test: An assessment that measures a candidate’s proficiency in specific job-related skills, such as computer programming, data analysis, or customer service.
Job simulation: A type of pre-employment assessment that replicates real-world job tasks and scenarios to evaluate a candidate’s ability to perform in a specific role.
Cultural fit assessment: An evaluation that measures a candidate’s alignment with a company’s values, beliefs, and work environment to ensure a successful long-term fit.
Behavioral interview: A structured interview technique that asks candidates to provide examples of past experiences and behaviors to predict their future performance in a specific job role.
Psychometric testing: A broad category of pre-employment assessments that measure a candidate’s cognitive abilities, personality traits, and other psychological factors to predict job performance.
Validity: The extent to which a pre-employment assessment accurately measures what it intends to measure and can predict job performance.
Reliability: The consistency and stability of a pre-employment assessment’s results over time, ensuring that it produces dependable, repeatable outcomes across administrations.
Test norms: A set of data that provides a benchmark for comparing an individual candidate’s pre-employment assessment results to those of other candidates or a specific population.
Adverse impact: The potential for a pre-employment assessment to unintentionally discriminate against certain groups of candidates, such as groups defined by race, gender, or age.
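In U.S. hiring practice, adverse impact is commonly screened with the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the process may warrant closer review. Here is a minimal sketch of that check (the group labels and counts are hypothetical):

```python
def four_fifths_check(applicants, hires, threshold=0.80):
    """Flag groups whose selection rate falls below `threshold`
    times the highest group's selection rate (the four-fifths rule)."""
    rates = {g: hires[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: {"rate": rate, "ok": rate / top >= threshold}
            for g, rate in rates.items()}

# Hypothetical applicant pools and hire counts
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 60, "group_b": 30}
print(four_fifths_check(applicants, hires))
# group_b's 20% rate is only two-thirds of group_a's 30% rate -> flagged
```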
Job analysis: A systematic process of identifying the skills, knowledge, and abilities required for a specific job role to develop appropriate pre-employment assessments.
Cut-off score: A predetermined threshold or minimum score on a pre-employment assessment that a candidate must achieve to be considered for a job role.
Job performance: The effectiveness with which an employee carries out the tasks and responsibilities associated with their role, as well as their overall contribution to the organization’s objectives.
Employee engagement: The level of commitment, involvement, and enthusiasm an employee has toward their job and the organization they work for.
Situational judgment test: A type of pre-employment assessment that presents candidates with hypothetical, job-related scenarios and asks them to choose the most appropriate course of action.
Quality of hire: A measure of how well a new employee performs in their role, their level of engagement, and their overall contribution to the organization’s success.
Time-to-fill: The amount of time it takes to fill a job vacancy, from the start of the recruitment process to the successful candidate’s first day of employment.
Time-to-productivity: The length of time it takes for a new employee to reach full productivity in their role and contribute effectively to the organization.
Cost-per-hire: The total expenses associated with hiring a new employee, including advertising, recruitment agency fees, screening and assessment costs, and onboarding expenses.
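Cost-per-hire is commonly computed as total internal plus external recruiting costs divided by the number of hires made in the same period. A quick sketch with hypothetical figures:

```python
def cost_per_hire(internal_costs, external_costs, hires):
    """Total recruiting spend divided by the number of hires made."""
    return (sum(internal_costs) + sum(external_costs)) / hires

internal = [12_000]          # hypothetical: recruiter time, referral bonuses
external = [8_000, 5_000]    # hypothetical: job ads, assessment platform fees
print(cost_per_hire(internal, external, hires=10))  # 2500.0 per hire
```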
New hire attrition: The turnover of employees who leave a company within a short period after being hired, often due to poor job fit or unmet expectations.
Assessment center: A selection method that involves multiple evaluators and a series of exercises or assessments designed to evaluate a candidate’s job-related competencies and skills.
Test fairness: The degree to which a pre-employment assessment is free from bias and provides an equal opportunity for all candidates to demonstrate their abilities, regardless of their background or personal characteristics.
Test security: The measures taken to protect the integrity of a pre-employment assessment, such as ensuring confidentiality, preventing cheating, and safeguarding test content.
Content validity: The extent to which a pre-employment assessment’s content is representative of and relevant to the job for which it is being used.
Criterion-related validity: The degree to which a pre-employment assessment’s results can accurately predict a candidate’s future job performance or other relevant outcomes.
Construct validity: The extent to which a pre-employment assessment accurately measures the underlying psychological traits or constructs it is designed to evaluate.
Face validity: The degree to which a pre-employment assessment appears, on the surface, to measure what it is intended to measure, making it more likely to be accepted by candidates and employers.
Voluntary attrition: The turnover of employees who leave a company by choice, such as for personal reasons or to pursue other job opportunities.
Involuntary attrition: The turnover of employees who are terminated by the employer due to performance issues, layoffs, or other organizational reasons.
Biographical data questionnaire: A pre-employment assessment that collects information about a candidate’s personal history, experiences, and achievements to predict their potential job performance and fit.
Computer Adaptive Testing (CAT): A type of assessment that adjusts the difficulty level of questions based on a candidate’s previous responses, providing a more accurate measurement of their abilities in a shorter amount of time.
Item Response Theory (IRT): A statistical framework used to model the relationship between a candidate’s underlying ability and their performance on assessment items, often used in the development of computer adaptive tests.
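The two entries above work together in practice: a CAT engine typically uses an IRT model to score each response and to pick the next item, often by maximizing Fisher information at the current ability estimate. Below is a deliberately simplified sketch assuming a two-parameter IRT model; the item bank is hypothetical, and the fixed-step ability update stands in for the maximum-likelihood estimation a real engine would use:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter IRT model: probability of a correct response given
    ability `theta`, item discrimination `a`, and item difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of an item at ability `theta`;
    adaptive engines favor the item that maximizes this."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def run_cat(bank, answer, n_items=4, step=0.5):
    """Toy adaptive loop: administer the most informative unused item,
    then nudge the ability estimate after each response."""
    theta, used = 0.0, set()
    for _ in range(n_items):
        item = max((i for i in bank if i not in used),
                   key=lambda i: information(theta, *bank[i]))
        used.add(item)
        theta += step if answer(item) else -step
    return theta

# Hypothetical item bank: item id -> (discrimination, difficulty)
bank = {"q1": (1.2, -1.0), "q2": (0.8, 0.0), "q3": (1.5, 0.2), "q4": (1.0, 1.0)}
print(run_cat(bank, answer=lambda item: bank[item][1] < 0.3))
```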
Test-retest reliability: A measure of the consistency of a pre-employment assessment’s results when administered to the same candidate at different times.
Inter-rater reliability: The consistency of scores assigned by different raters or evaluators when assessing the same candidate’s performance on a pre-employment assessment.
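Both reliability coefficients are reported as statistics: test-retest reliability is usually a correlation between two administrations, while inter-rater reliability for categorical ratings is often expressed as Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch with made-up scores (requires Python 3.10+ for `statistics.correlation`):

```python
from statistics import correlation  # Python 3.10+

# Test-retest: the same five candidates, two administrations (hypothetical)
time1 = [72, 85, 90, 64, 78]
time2 = [70, 88, 91, 60, 80]
print(f"test-retest r = {correlation(time1, time2):.2f}")  # ~0.99

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    expected = sum((rater1.count(l) / n) * (rater2.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Inter-rater: two evaluators scoring the same candidates pass/fail
r1 = ["pass", "pass", "fail", "pass", "fail", "fail"]
r2 = ["pass", "fail", "fail", "pass", "fail", "fail"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")  # 0.67
```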
Convergent validity: The degree to which two different assessments measuring the same underlying construct yield similar results.
Discriminant validity: The degree to which a pre-employment assessment does not correlate with measures of different, unrelated constructs, providing evidence that it captures a distinct trait rather than overlapping with other measures.
Standard error of measurement (SEM): An estimate of the amount of error associated with a candidate’s test score, used to determine the precision of the assessment.
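Under classical test theory (defined later in this glossary), the SEM follows directly from the test’s standard deviation and its reliability: SEM = SD × √(1 − reliability). A quick sketch:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical formula: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical test: standard deviation of 10 points, reliability of 0.91
print(standard_error_of_measurement(sd=10, reliability=0.91))  # 3.0
```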
Pre-test/post-test design: A research design in which candidates’ abilities are assessed before and after an intervention (such as training or education) to determine the effectiveness of the intervention.
Test blueprint: A document that outlines the structure and content of a pre-employment assessment, including the types of questions, the weighting of different sections, and the overall difficulty level.
Test equating: A statistical process used to adjust the scores of different forms or versions of a pre-employment assessment to ensure that they are comparable and can be used interchangeably.
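One of the simplest approaches is linear equating, which rescales Form X scores so that their mean and standard deviation match Form Y’s. A sketch, assuming the two forms were taken by comparable groups (the score data are hypothetical):

```python
from statistics import mean, stdev

def linear_equate(score_x, form_x_scores, form_y_scores):
    """Map a Form X score onto the Form Y scale:
    y = My + (SDy / SDx) * (x - Mx)."""
    mx, sx = mean(form_x_scores), stdev(form_x_scores)
    my, sy = mean(form_y_scores), stdev(form_y_scores)
    return my + (sy / sx) * (score_x - mx)

form_x = [55, 60, 65, 70, 75]   # hypothetical: the harder form
form_y = [60, 65, 70, 75, 80]   # hypothetical: the easier form
print(linear_equate(68, form_x, form_y))  # 73.0 on the Form Y scale
```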
Norm-referenced scoring: A scoring method that compares a candidate’s performance on a pre-employment assessment to the performance of a reference group or norm group.
Criterion-referenced scoring: A scoring method that compares a candidate’s performance on a pre-employment assessment to a predefined set of criteria or standards.
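The contrast between these two scoring methods is easy to see side by side: the same raw score becomes a percentile rank against a norm group, or a pass/fail decision against a fixed standard. A small illustration (the norm group and the 70-point standard are hypothetical):

```python
def percentile_rank(score, norm_group):
    """Norm-referenced: share of the norm group scoring below `score`."""
    return 100.0 * sum(s < score for s in norm_group) / len(norm_group)

def meets_standard(score, criterion=70):
    """Criterion-referenced: compare the score to a fixed standard."""
    return score >= criterion

norm_group = [52, 58, 61, 65, 68, 71, 74, 77, 80, 88]
print(percentile_rank(72, norm_group))  # 60.0 -> outscored 60% of the group
print(meets_standard(72))               # True -> clears the 70-point bar
```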
Anchored rating scales: A type of rating scale used in assessments that provides specific behavioral examples or descriptions for each level of performance, reducing subjectivity and increasing the consistency of ratings among evaluators.
Differential Item Functioning (DIF): A statistical analysis that examines whether different groups of candidates respond differently to individual assessment items, potentially indicating the presence of bias or unfairness in the test.
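One widely used DIF statistic is the Mantel-Haenszel common odds ratio, which compares item pass rates between a reference group and a focal group after stratifying candidates by total test score; a value far from 1.0 flags the item for review. A minimal sketch with hypothetical counts:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across score strata. Each
    stratum: (ref_correct, ref_incorrect, focal_correct, focal_incorrect)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts for one item, stratified by total score band
strata = [
    (40, 10, 30, 20),   # lower-scoring candidates
    (60, 5, 45, 15),    # higher-scoring candidates
]
print(f"MH odds ratio = {mantel_haenszel_or(strata):.2f}")  # ~3.17 -> review
```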
Utility analysis: A quantitative method used to evaluate the effectiveness of a pre-employment assessment or selection method by estimating its impact on organizational outcomes, such as productivity and cost savings.
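A classic formulation is the Brogden-Cronbach-Gleser model, which estimates the dollar gain of a selection procedure as N × T × r × SDy × z̄ − C, where N is the number of hires, T their expected tenure, r the assessment’s validity, SDy the dollar value of one standard deviation of job performance, z̄ the hires’ average standardized assessment score, and C the total cost. A sketch with hypothetical inputs:

```python
def bcg_utility(n_hired, tenure_years, validity,
                sd_performance_usd, mean_z_of_hires, total_cost):
    """Brogden-Cronbach-Gleser estimate: N * T * r * SDy * z_bar - C."""
    return (n_hired * tenure_years * validity
            * sd_performance_usd * mean_z_of_hires) - total_cost

# Hypothetical: 20 hires staying ~3 years, validity r = 0.30,
# SDy = $15,000, hires averaging 0.8 SD above the applicant mean,
# and $50,000 in total assessment costs.
print(bcg_utility(20, 3, 0.30, 15_000, 0.8, 50_000))  # 166000.0
```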
Multitrait-multimethod matrix (MTMM): A research design used to evaluate the convergent and discriminant validity of multiple assessment methods measuring multiple constructs or traits.
Response process validity: An aspect of validity that examines whether candidates interpret and respond to assessment items in the way they were intended, providing evidence that the test is measuring the intended construct.
Test accommodations: Modifications made to the administration of a pre-employment assessment to ensure that candidates with disabilities or other special needs have an equal opportunity to demonstrate their abilities.
Test speediness: A characteristic of a pre-employment assessment that requires candidates to complete the test within a specified time limit, potentially influencing their performance and the interpretation of their scores.
Rasch model: A specific type of Item Response Theory (IRT) model used to estimate the difficulty of test items and the ability of candidates based on their responses to those items.
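In the Rasch model, the probability of a correct answer depends only on the gap between the candidate’s ability θ and the item’s difficulty b; it is the one-parameter special case of the 2PL model sketched earlier, with discrimination fixed at 1:

```python
import math

def rasch_p_correct(theta, b):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p_correct(theta=1.0, b=1.0))  # 0.5: ability equals difficulty
print(rasch_p_correct(theta=2.0, b=1.0))  # ~0.73: stronger candidate
```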
Domain sampling theory: A theoretical framework that underlies the development of criterion-referenced tests, positing that a test should sample a representative set of tasks or content from the domain of interest.
Confidence interval: A range of scores within which a candidate’s true score on a pre-employment assessment is likely to fall, with a specified level of confidence, accounting for the potential measurement error associated with the test.
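Combining this with the SEM defined earlier: an approximate 95% confidence band around an observed score is the score ± 1.96 × SEM. A quick sketch:

```python
def score_band(observed, sem, z=1.96):
    """Approximate confidence band: observed +/- z * SEM."""
    return observed - z * sem, observed + z * sem

# Observed score of 82 with SEM = 3.0 (from the earlier SEM sketch)
low, high = score_band(82, 3.0)
print(f"95% band: {low:.1f} to {high:.1f}")  # 76.1 to 87.9
```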
Classical Test Theory (CTT): A traditional approach to test development and scoring that assumes a candidate’s observed test score is composed of their true score and a random error component.
Forced-choice assessment: A type of pre-employment assessment that presents candidates with multiple statements or options and requires them to choose the one that best describes their preferences or behaviors, reducing the impact of social desirability bias.
Social desirability bias: The tendency for candidates to respond to assessment items in a way that makes them appear more favorable or socially acceptable, potentially distorting their true scores.
Test form equivalence: The degree to which different forms or versions of a pre-employment assessment are interchangeable, providing comparable results and maintaining the same level of difficulty.
Universal design for assessment: A set of principles and guidelines aimed at ensuring that pre-employment assessments are accessible, inclusive, and fair for all candidates, regardless of their background or abilities.