Selection by Design - Blog
The central focus of psychometric testing is interpreting and sharing test results. Test feedback provides a chance to go beyond numbers and statistics to convey meaning for test takers and client organisations. Test scores may help a student make initial career choices, inform an organisation about where candidates stand on key abilities, or support the interpretation of a personality profile for leadership development.
How can you ensure effective, meaningful communication during this process? Feedback might take the form of either an oral discussion or a written report. Oral feedback provides an excellent opportunity for frank and honest examination of how scores might have been affected by anything related to the testing process or conditions. Such a conversation gives the test taker the chance to compare anticipated with actual performance. Do scores confirm expectations? If not, in what ways? Allowing reactions and feelings to be discussed may improve impressions of the assessment and of the administrator or organisation involved.
Written feedback may follow an interactive discussion, reflecting test taker input as well as test score results. A written feedback report provides a person with a record of their performance, while addressing many of the key areas listed below. It is essential that this type of information be clearly and effectively communicated, at a level consistent with the background of the audience.
Feedback might alternatively be provided to organisations interested in including scores as part of a hiring decision process. This type of report may require more statistical detail than that given to individuals. It still needs to be clearly communicated in language that is professional and jargon-free.
The following are key components of successful feedback, regardless of audience or format. Working through this checklist as you prepare to share test results will help ensure a positive, interactive experience for your clients.
1. Establish rapport.
The goal is to set the stage for a comfortable context in which information can be shared.
2. Provide background on the measure.
This might include answers to questions like: How was the test developed? What is its purpose? Who uses it, and in what ways? How long has it been in use? What was the justification for using this particular test in the present context?
3. Describe the psychometric qualities of the test.
Without being overly technical or using jargon, explain evidence for reliability, validity, fairness and the appropriateness of available norm groups.
4. At what level did the test taker perform?
Communicate performance to individuals without referring to raw scores, complex statistics or judgmental terms. The focus should be on describing performance and what it means in familiar terms the test taker can clearly follow. Test manuals may suggest how to phrase performance results, for example by offering adverbs and adjectives matched to standardised scores; the 16pf manual covers this in depth. A personality result might be phrased: "Your performance suggests that you are highly...". For ability tests you might try phrases such as: "Your score is typical of that for most people working in this type of job", or "Your performance fell below that required by this organisation for the present job". It is best to avoid using any numbers when explaining performance.
Test scores become meaningful when compared with typical group performance, which is done using standardised scores. Without a background in statistics or measurement, it is easy to misinterpret what standardised scores mean. For example, a score at the 75th percentile does not mean the person answered 75% of items correctly; it means they scored higher than 75% of the norm group. By design, very few people attain the highest (or lowest) levels of performance; most scores fall in the average range. Bear in mind that test takers generally do not appreciate learning that their performance was at the mid-point of a scale. Importantly: do not provide clients with raw scores, such as the number of items answered correctly. This information is not meaningful on its own and is likely to be misinterpreted.
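To illustrate the raw-score versus percentile distinction, here is a minimal Python sketch. The norm-group figures are purely hypothetical (not drawn from any real test manual), and it assumes normally distributed scores:

```python
from statistics import NormalDist

def percentile_rank(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a percentile rank against a norm group,
    assuming scores are normally distributed."""
    z = (raw_score - norm_mean) / norm_sd  # standardised (z) score
    return NormalDist().cdf(z) * 100       # % of norm group scoring below

# Hypothetical 40-item test with a norm-group mean of 24 and SD of 6.
# A raw score of 28 is only 70% of items correct,
# yet it sits at roughly the 75th percentile of the norm group:
print(round(percentile_rank(28, norm_mean=24, norm_sd=6)))  # -> 75
```

The point the sketch makes is the one above: percentile rank describes standing relative to other people, not the proportion of items answered correctly, which is why raw scores should not be reported to clients.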
5. Ensure the client or test taker has the opportunity for input.
The test taker should be given the opportunity to have any questions about the assessment answered, to discuss their experience with the test, and to offer perspective on their own abilities or attributes. This is important both ethically and as a means to strengthen the relationship between the test administrator or organisation and the test taker. Because all scores are influenced by some level of measurement error, it is possible that test results do not accurately reflect a candidate's underlying attribute. Giving test takers the chance to discuss their experience and share their perspective on their performance provides insight into the size and direction of any such error. This information can be included in written reports; even computer-generated reports can be amended or appended to include it.
6. Explain what will happen next.
This includes informing the test taker about any follow-up discussions you may schedule with them, or additional assessments they will be asked to complete. In the case of selection decisions, when and how will their results be considered? Who will make final decisions, and how will they be communicated? Communicate what will happen with the individual's data. If it will be retained, provide information on security and confidentiality: How will it be stored, who will have access, and for how long? If anyone may wish to use test scores or performance results for any other purpose in the future, permission from the test taker is needed.
Selection by Design’s training in the BPS/EFPA qualifications offers a great opportunity to learn more about how to provide effective psychometric test feedback, while providing a chance for each trainee to gain experience in this essential set of skills. We also offer advanced feedback training and one-to-one consultation on improving your feedback skills. Contact us today to join our courses or to arrange a session.