Posted June 26, 2009
Reliability analysis assesses whether a scale consistently reflects the construct it is measuring. There are certain situations in which it is particularly useful.
Statistics Solutions is the country's leader in statistical data analysis and can assist with reliability analysis for your dissertation, thesis or research project. Contact Statistics Solutions today for a free 30-minute consultation.
One way the researcher can think about reliability is this: two observations that are equivalent in terms of the construct being measured should also yield equivalent outcomes.
A popular technique is split-half reliability. This method splits the scale into two halves, and a score is computed for each participant on each half. If the scale is reliable, a person's score on one half should be equivalent to his or her score on the other half, and this should hold for all the participants.
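The split-half idea can be sketched in a few lines of Python. The data below are made up for illustration (a hypothetical six-item scale), and the Spearman-Brown step is the standard correction for estimating full-scale reliability from a half-scale correlation:

```python
# Split-half reliability: a minimal sketch on made-up data.
# Each row is one participant's responses to a six-item scale.
scores = [
    [4, 5, 3, 4, 5, 4],
    [2, 1, 2, 3, 1, 2],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1],
]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split the scale: odd-numbered items in one half, even-numbered in the other.
half1 = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
half2 = [sum(row[1::2]) for row in scores]   # items 2, 4, 6

r = pearson(half1, half2)
# Spearman-Brown correction: estimate full-scale reliability
# from the correlation between the two half-scales.
reliability = 2 * r / (1 + r)
print(round(reliability, 3))
```

With consistent responses like these, the two half-scores track each other closely and the corrected reliability is near 1.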
The major problem is that a set of items can be divided into two halves in many different ways, so the resulting reliability estimate depends on the particular split chosen.
In order to overcome this problem, Cronbach (1951) introduced the measure that is now most common in reliability analysis. This measure is loosely equivalent to splitting the data into two halves in every possible way, computing the correlation coefficient for each split, and averaging these values; the average is similar to the value of Cronbach's alpha.
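In practice, Cronbach's alpha is computed directly from the item and total-score variances rather than by enumerating splits. A minimal sketch, again on made-up data, using the usual formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Cronbach's alpha from item variances and total-score variance.
# The responses are hypothetical, for illustration only.
scores = [
    [4, 5, 3, 4, 5, 4],
    [2, 1, 2, 3, 1, 2],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1],
]

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(scores[0])                       # number of items
items = list(zip(*scores))               # one tuple per item, across participants
totals = [sum(row) for row in scores]    # each participant's total score

alpha = k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))
print(round(alpha, 3))
```

For consistent data like these, alpha comes out high (well above 0.9).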
There are basically two versions of alpha in reliability analysis: the normal version and the standardized version. The normal version is applicable when the items on a scale are summed to produce a single score for that scale. The standardized version is applicable when the items are standardized before they are summed.
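The standardized version can be computed from the mean inter-item correlation, rbar, via alpha_std = k * rbar / (1 + (k - 1) * rbar). A sketch on the same kind of made-up data:

```python
# Standardized alpha from the mean inter-item correlation.
# The responses are hypothetical, for illustration only.
from itertools import combinations

scores = [
    [4, 5, 3, 4, 5, 4],
    [2, 1, 2, 3, 1, 2],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1],
]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

items = list(zip(*scores))               # one tuple per item
k = len(items)
pairs = list(combinations(range(k), 2))  # all item pairs
rbar = sum(pearson(items[i], items[j]) for i, j in pairs) / len(pairs)

alpha_std = k * rbar / (1 + (k - 1) * rbar)
print(round(alpha_std, 3))
```

Standardizing each item first and applying the raw-alpha formula gives the same result; the rbar form simply makes the dependence on inter-item correlation explicit.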
According to Kline (1999), the acceptable value of alpha is 0.8 in the case of intelligence tests and 0.7 in the case of ability tests.
Reliability analysis rests on several assumptions.
While conducting reliability analysis in SPSS, the researcher should select “Tukey’s test of additivity,” because additivity is assumed.
Independence of the observations is assumed. However, the researcher should note that test-retest reliability involves correlated observations by design, and this correlation does not pose a statistical problem in assessing reliability.
It is assumed that the errors are uncorrelated with each other. This means that the measurement error on one item is not associated with the error on any other item.
To attain reliability in the data, the researcher's coding must be consistent. This means that high values must be coded consistently, so that they have the same meaning across all the items; negatively worded items should therefore be reverse-coded.
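Reverse-coding is a one-line transformation. A minimal sketch, assuming a hypothetical 1-to-5 response scale:

```python
# Reverse-code a negatively worded item so that high values mean
# the same thing across all items. Assumes a 1-to-5 scale (hypothetical).
LOW, HIGH = 1, 5

def reverse_code(response):
    """Map 1 -> 5, 2 -> 4, 3 -> 3, 4 -> 2, 5 -> 1."""
    return LOW + HIGH - response

responses = [1, 4, 5, 2, 3]
print([reverse_code(r) for r in responses])  # → [5, 2, 1, 4, 3]
```

Without this step, a negatively worded item correlates negatively with the rest of the scale and artificially depresses alpha.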
In the split-half type of reliability analysis, random assignment of the items to the two halves is assumed. Generally, the odd-numbered items fall into one half and the even-numbered items into the other.