Posted June 26, 2009
In reliability analysis, the word reliability refers to the idea that a scale should consistently reflect the construct it is measuring. There are certain times and situations where reliability analysis can be useful.
Statistics Solutions is the country's leader in statistical data analysis and can assist with reliability analysis for your dissertation, thesis or research project. Contact Statistics Solutions today for a free 30-minute consultation.
One way for the researcher to think about reliability is that two observations which are equivalent in terms of the construct being measured should also have equivalent outcomes.
A popular technique of reliability analysis is split-half reliability. This method splits the scale into two halves, and a score for each participant is then computed from each half. If the scale is reliable, a person's score on one half should be equivalent to his or her score on the other half, and this should hold for all participants.
The major problem with this method is that there are many ways in which a set of items can be divided into two halves, so the estimate obtained depends on how the split is made.
To overcome this problem, Cronbach (1951) introduced a measure that has become the most common in reliability analysis. Cronbach's alpha is loosely equivalent to splitting the data into two halves in every possible way, computing the correlation coefficient for each split, and taking the average of these values.
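In practice, Cronbach's alpha is computed directly from the item variances and the variance of the total score. A minimal sketch in Python, again using made-up responses for a six-item scale:

```python
from statistics import pvariance

# Hypothetical responses: rows are participants, columns are six scale items.
data = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 2, 2, 1],
    [4, 4, 5, 4, 4, 5],
]

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(data[0])
    item_vars = [pvariance([row[i] for row in data]) for i in range(k)]
    total_var = pvariance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Population variances are used here for simplicity; the result is the same as long as the same variance estimator is applied to both the items and the total score, since the ratio is unchanged.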
There are basically two versions of alpha: the normal version and the standardized version. The normal version applies when the items on a scale are summed to produce a single score for that scale. The standardized version applies when the items are standardized before they are summed.
According to Kline (1999), an acceptable value of alpha is 0.8 in the case of intelligence tests and 0.7 in the case of ability tests.
Reliability analysis rests on certain assumptions.
While conducting reliability analysis in SPSS, the researcher should select "Tukey's test of additivity," as additivity is assumed in reliability analysis.
Independence of the observations is also assumed. However, the researcher should note that the test-retest type of reliability analysis involves correlated observations, which does not pose a statistical problem in assessing reliability.
It is further assumed that the errors are uncorrelated with each other; that is, there is no association among the errors of measurement.
To attain reliable data, the researcher's coding must be consistent: high values should be coded so that they have the same meaning across all the items.
In split-half reliability, random assignment of the items to the two halves is assumed. In practice, the odd-numbered items usually fall into one half and the even-numbered items into the other.