If an instrument requires some professional judgments in scoring, then the manual should also include information on

a. Interrater reliability
b. Corrections of the reliability coefficients using the Spearman-Brown formula
c. Both KR-20s and KR-21s
d. Test-retest reliability coefficients

a. Interrater reliability

When an instrument requires professional judgments in scoring, the corresponding manual should include information on reliability; interrater reliability is the most directly relevant, because scores depend on the judgments of individual raters. The options above touch on several aspects of reliability that help assess the consistency and accuracy of the instrument's measurements:

a. Interrater Reliability: This refers to the degree of agreement or consistency between different raters or scorers who assess or score the instrument. The manual should provide guidelines and procedures to ensure interrater reliability, such as training scorers and establishing scoring criteria.
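As a rough illustration (the ratings below are made up, not from any real instrument), interrater agreement can be quantified with percent agreement and Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
# Sketch: percent agreement and Cohen's kappa for two raters scoring
# the same ten responses on a 0-2 scale (hypothetical data).
from collections import Counter

rater_a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 0]
rater_b = [2, 1, 1, 2, 2, 1, 0, 0, 2, 0]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of the two raters' marginal proportions,
# summed over the score categories.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, kappa = {kappa:.3f}")
```

Kappa is lower than raw agreement whenever some agreement would occur by chance alone, which is why manuals typically report it alongside (or instead of) simple percent agreement.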

b. Corrections of the Reliability Coefficients Using the Spearman-Brown Formula: The Spearman-Brown formula estimates how reliable an instrument would be if its length were changed; classically, it is used to "step up" a split-half correlation to estimate the reliability of the full-length test. The manual should explain how any such corrections were calculated and how to interpret them.
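The Spearman-Brown prophecy formula itself is a one-liner; a minimal sketch (the .60 split-half correlation is an invented example):

```python
def spearman_brown(r, k):
    """Projected reliability when test length changes by a factor of k.

    r: reliability of the current test (e.g., a split-half correlation)
    k: ratio of new length to old length (k=2 doubles the test)
    """
    return (k * r) / (1 + (k - 1) * r)

# Stepping up a split-half correlation of .60 to full-length reliability:
print(round(spearman_brown(0.60, 2), 3))  # 0.75
```

Note that the formula assumes any added items are parallel to the existing ones; lengthening a test with weaker items will not deliver the projected gain.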

c. Both KR-20s and KR-21s: KR-20 and KR-21 are formulas used to estimate the internal consistency of tests with dichotomous items (e.g., true/false or yes/no questions). KR-21 is a simplified version of KR-20 that assumes all items are equally difficult, so reporting both gives a more complete picture of the instrument's internal consistency.
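A small worked sketch of both formulas, using an invented 0/1 response matrix (five examinees by four items), may make the difference concrete:

```python
# Hypothetical response matrix: 5 examinees x 4 dichotomous items.
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
]

def variance(xs):  # population variance, as in the classical KR formulas
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])                      # number of items
totals = [sum(row) for row in scores]   # each examinee's total score
var_total = variance(totals)

# KR-20 uses each item's difficulty p (and q = 1 - p).
pq = sum((p := sum(col) / len(col)) * (1 - p) for col in zip(*scores))
kr20 = (k / (k - 1)) * (1 - pq / var_total)

# KR-21 needs only the mean total score, because it assumes
# all items are equally difficult.
mean_total = sum(totals) / len(totals)
kr21 = (k / (k - 1)) * (1 - mean_total * (k - mean_total) / (k * var_total))

print(round(kr20, 3), round(kr21, 3))
```

When item difficulties actually vary, as in this toy matrix, KR-21 comes out lower than KR-20, which is why KR-21 is usually treated as a quick, conservative approximation.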

d. Test-Retest Reliability Coefficients: This type of reliability assesses the instrument's stability over time by measuring the correlation between scores obtained by the same individuals on two separate occasions. The manual should include information on how to calculate and interpret test-retest reliability coefficients, as well as suggestions for appropriate time intervals between test administrations.
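The test-retest coefficient is simply the Pearson correlation between the two administrations; a self-contained sketch with invented scores for six people tested twice:

```python
# Hypothetical scores for six people tested on two occasions.
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 10, 17, 15, 16]

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(time1, time2), 3))
```

The choice of interval matters as much as the arithmetic: too short and memory inflates the correlation, too long and genuine change in the trait deflates it, which is why the manual should state the interval used.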

Including information on these reliability measures in the manual enhances the instrument's transparency and allows users to evaluate and understand the quality of the instrument's scoring.

To understand what information should be included in the manual of an instrument that requires professional judgments in scoring, let's break down the options provided:

a. Interrater reliability: This refers to the consistency or agreement among different raters or scorers. To determine interrater reliability, multiple scorers independently rate the same responses or performances, and their scores are then compared. To include information on interrater reliability in the manual, the author would need to describe the method used to establish interrater reliability, such as by providing guidelines or examples for raters to follow.

b. Corrections of the reliability coefficients using the Spearman-Brown formula: The Spearman-Brown formula is used to estimate the reliability of a test or instrument if the length of the test is increased or decreased. This option suggests including information on how to use the formula to correct or estimate the reliability coefficients of the instrument. This might be relevant if the length of the test is modified or if there are concerns about the instrument's overall reliability.

c. Both KR 20s and KR 21s: KR 20 and KR 21 are formulas used to calculate the internal consistency reliability of a test or instrument. They measure the extent to which the items in a test are consistently measuring the same construct. Including information on both KR 20 and KR 21 in the manual would involve explaining how to calculate these coefficients and interpret their values, in order to assess the internal consistency reliability of the instrument.

d. Test-retest reliability coefficients: Test-retest reliability assesses the stability or consistency of test scores over time. It measures the degree to which scores on the same test, administered to the same individuals on two different occasions, correlate with each other. Including information on test-retest reliability coefficients would involve describing the procedure used to calculate these coefficients and any relevant guidelines for interpreting the results.

In summary, if an instrument requires professional judgments in scoring, the manual should above all include information on interrater reliability, since the scores depend on the raters themselves. Where relevant to the instrument's purpose, it may also report corrections of reliability coefficients via the Spearman-Brown formula, internal consistency coefficients such as KR-20 and KR-21, and test-retest reliability coefficients.