Psychometric tests offer important advantages in assessment. The following qualities give a test user or administrator some assurance that a psychometric test will measure consistently, accurately and fairly:
- Evidence for the statistical qualities of validity and reliability will have been examined before test publication.
- It will be possible to see whether test questions reflect important skills or attributes.
- Questions of how well items fit together as a scale, and whether they measure consistently, will have already been considered.
- Because psychometric tests are standardized, you can have confidence that testing and scoring procedures have been carefully designed and examined. Scores on psychometric tests are interpreted according to how others have performed in the past.
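The scale-consistency point above is commonly quantified with Cronbach's alpha, a standard index of internal consistency. As a minimal sketch (the item scores below are invented purely for illustration):

```python
# Cronbach's alpha: a common index of how well items "hang together"
# as a scale. Illustrative only -- the scores below are made up.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by test-taker."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of test-takers

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Five test-takers, three items each scored 0-5 (invented data)
scores = [
    [4, 2, 5, 3, 1],   # item 1
    [5, 2, 4, 3, 2],   # item 2
    [4, 1, 5, 4, 1],   # item 3
]
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

Values closer to 1 indicate items that measure consistently; test publishers typically report figures like this in a technical manual.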
But what happens when examiners need to create their own assessment instruments? This is often the case for lecturers or teachers working in a classroom context.
Assessments that are examiner-designed typically assess particular topic areas, and are likely to be used with small student groups. Because they are limited in applicability, the resources needed to evaluate their psychometric qualities may be lacking. For instance, large numbers of prior test scores are needed to form comparative norm groups. Evaluating qualities like reliability and validity also requires sufficient data, not to mention the statistical know-how, software, and time needed to run the analyses.
There are steps anyone who develops their own assessments can take to promote effective measurement.
We suggest the following:
1. Communicate clearly when wording questions and possible responses. For example, avoid complex and double-barreled statements. To this end, it is very helpful to have someone else review your material; a moderator who reviews the assessment before use can play this role effectively.
2. Take time to carefully review the content of your questions. Do your items reflect all the important content areas, or do they focus on just a few? This will help to improve the validity of your assessment.
3. Avoid items that contain language or vocabulary that is unfamiliar to some students. This will help to improve fairness in assessment. Once again, an assessment moderator may be helpful.
4. Develop a scoring key when constructing your questions. This is particularly important if you include open-ended or essay-style questions. How will points be awarded or deducted? This information should be communicated to students to help them prepare for assessment. It will help to increase fairness and consistency in scoring.
5. Develop formal instructions to be presented to students – both orally and in writing. This can also increase fairness and consistency during testing, helping to eliminate misinterpretation.
6. Use caution when interpreting test scores that are very close together. Consider where meaningful differences lie. Does a point or two matter, especially around the major cut-off lines (such as pass/fail)? Have you considered that some error may affect scoring, or question interpretation? According to Classical Test Theory (e.g., Crocker & Algina, 1986), test scores reflect not only a person’s true score, but some amount of error. Testing is a less than perfect process!
7. Keep records of question performance. Not student performance in this case, but rather how well your items differentiated among those who took the test. Items that are passed or missed by everyone tell you little about performance. The best questions will be those that separate high and low performers.
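The Classical Test Theory point in step 6 can be made concrete with the standard error of measurement (SEM), which estimates how far observed scores typically fall from true scores: SEM = SD × √(1 − reliability). A minimal sketch, using an invented score spread and reliability figure:

```python
import math

# Classical Test Theory: an observed score = true score + error.
# The standard error of measurement (SEM) estimates the typical size
# of that error. All figures below are invented for illustration.

def sem(sd, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

sd = 10.0           # standard deviation of test scores (assumed)
reliability = 0.84  # e.g. an internal-consistency estimate (assumed)

error = sem(sd, reliability)   # 10 * sqrt(0.16) = 4.0
score = 58
band = (score - 1.96 * error, score + 1.96 * error)
print(f"SEM = {error:.1f}; ~95% band for {score}: "
      f"{band[0]:.1f} to {band[1]:.1f}")
# With these figures a pass mark of 60 falls inside the band,
# so a 58 and a 61 may not be meaningfully different.
```

This is why a point or two around a cut-off should be treated cautiously rather than as a hard dividing line.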
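The record-keeping in step 7 can be as simple as tallying, for each question, the proportion who answered correctly (item difficulty) and how well the item separates high from low total scorers (a simple upper–lower discrimination index). A sketch over invented right/wrong data:

```python
# Simple item analysis on a 0/1 response matrix (1 = correct).
# Rows = test-takers, columns = items; the data are invented.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]

n = len(responses)
k = len(responses[0])
totals = [sum(row) for row in responses]

# Difficulty: proportion answering each item correctly.
difficulty = [sum(row[i] for row in responses) / n for i in range(k)]

# Discrimination: correct rate in the top half of total scorers
# minus the rate in the bottom half (upper-lower index).
order = sorted(range(n), key=lambda p: totals[p], reverse=True)
upper, lower = order[: n // 2], order[n - n // 2:]

def prop(group, i):
    return sum(responses[p][i] for p in group) / len(group)

discrimination = [prop(upper, i) - prop(lower, i) for i in range(k)]

print("difficulty:    ", [round(d, 2) for d in difficulty])
print("discrimination:", [round(d, 2) for d in discrimination])
# Item 4 was answered correctly by everyone (difficulty 1.0,
# discrimination 0.0), so it tells us little about performance.
```

Items everyone passes or fails show zero discrimination, which is exactly why they reveal little; items with higher discrimination are the ones separating your high and low performers.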
Please get in touch if you would like to learn more about how to improve assessments. We cover issues surrounding effective testing extensively in Selection by Design’s Test User: Occupational, Ability training course.
Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. New York: Holt, Rinehart and Winston, Inc.