Creating and Validating an Instrument

Quantitative Methodology

The researcher can search the literature and test databases to find a suitable existing instrument. If none is available, the researcher can follow four phases to develop an instrument that accurately measures the variables of interest (Creswell, 2005): planning, construction, quantitative evaluation, and validation. Each phase includes several steps that must be completed to fully satisfy its requirements.

Planning Phase

The first phase, planning, begins with identifying the test's purpose, specifying the content area to study, and determining the target group. The second step is to review the literature for existing instruments. Researchers can check the ERIC website (www.eric.ed.gov), the Mental Measurements Yearbook (Impara & Plake, 1999), and Tests in Print (Murphy, Impara, & Plake, 1999).

Once it is confirmed that no suitable instrument exists, the researcher reviews the literature to determine the operational definitions of the constructs. This task can be challenging, because operationalizing a variable does not guarantee good measurement; the researcher must review multiple sources to arrive at an accurate and meaningful definition of each construct. From this information, the researcher should develop open-ended questions to present to a sample that is representative of the target group. The responses help the researcher identify areas of concern related to the constructs to be measured. The researcher should use the open-ended responses and the literature review together to create and refine accurate measures of the constructs.

Construction Phase

The second phase, construction, begins with identifying the objectives of the instrument and developing a table of specifications. The specifications should narrow the purpose and identify the content areas. In the specification process, the researcher should associate each variable with a concept and an overarching theme (Ford, 2007, http://www.blaiseusers.org/2007/papers/Z1%20-%20Survey%20Specifications%20Mgmt%20at%20Stats%20Canada.pdf).
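
To make the table of specifications concrete, the sketch below shows one simple way to record the variable-to-concept-to-theme mapping in Python. The variables, concepts, and theme are hypothetical placeholders invented for illustration; the source prescribes no particular format.

    # A minimal sketch of a table of specifications for a hypothetical
    # study of student engagement. Variable names, concepts, and the
    # theme are placeholders, not prescribed content.
    specification_table = {
        "minutes_on_task":  {"concept": "behavioral engagement", "theme": "engagement"},
        "enjoyment_rating": {"concept": "emotional engagement",  "theme": "engagement"},
        "strategy_use":     {"concept": "cognitive engagement",  "theme": "engagement"},
    }

    # Each instrument item should later be traceable to one row of this table.
    for variable, spec in specification_table.items():
        print(f"{variable}: {spec['concept']} -> {spec['theme']}")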

Once the table of specifications is complete, the researcher can write the instrument's items. The researcher must choose a format (e.g., Likert scale, multiple choice) based on the type of data to be collected. Depending on the project's financial resources, the researcher may hire experts in the field to write the items. Once the items are written, they must be reviewed for clarity, formatting, acceptable response options, and wording. After reviewing the questions several times, the researcher should present them to peers and colleagues in the same format that will be used to administer the instrument.

The peers and colleagues should match each item to the table of specifications, and the researcher must revise any item that does not match exactly. An instrument is content valid when its items adequately reflect the process and content dimensions of the instrument's objectives (Benson & Clark, 1982). The researcher should then distribute the instrument to a sample that represents the target group; this time the group should take the survey and critique the quality of the individual items and of the overall instrument.

Quantitative Evaluation Phase

Phase three, quantitative evaluation, includes administering a pilot study to a representative sample. It may be helpful to ask the participants for feedback to allow further refinement of the instrument. The pilot study provides quantitative data that the researcher can test for internal consistency by computing Cronbach's alpha. The reliability coefficient can range from 0.00 to 1.00, with values of 0.70 or higher indicating acceptable reliability (George & Mallery, 2003). Administering the instrument to the same sample at two different time points and correlating the responses establishes test-retest reliability; if the instrument is intended to predict future behavior, the researcher should instead correlate its scores with the criterion behavior measured later, which establishes predictive validity. The researcher can examine these measurements to make informed decisions about revisions to the instrument.
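
As a minimal sketch, assuming pilot responses are stored as a respondents-by-items matrix of numeric scores, Cronbach's alpha can be computed directly from its standard formula; the data below are hypothetical.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items score matrix."""
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical pilot data: six respondents answering four Likert items (1-5).
    pilot = np.array([
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 4, 3, 3],
    ])

    print(f"alpha = {cronbach_alpha(pilot):.2f}")  # 0.70 or higher suggests acceptable reliability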

Validation Phase

Phase four is validation. In this phase the researcher analyzes the pilot data to establish validity, first determining which concept of validity is most important for the instrument. The three types of validity are content, criterion-related, and construct. Content validity is the extent to which the questions on a survey are representative of the questions that could be asked to assess a particular construct. To examine content validity, the researcher should consult two to three experts.
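
One common way to summarize the experts' judgments is an item-level content validity index, the proportion of experts rating an item relevant. This index is an assumption beyond the text, which only says to consult the experts, and the ratings below are hypothetical.

    # Hedged sketch: item-level content validity index (I-CVI), an
    # assumption beyond the text. Ratings: 1 = relevant, 0 = not relevant.
    expert_ratings = {            # hypothetical ratings from three experts
        "item_1": [1, 1, 1],
        "item_2": [1, 0, 1],
        "item_3": [0, 0, 1],
    }

    for item, ratings in expert_ratings.items():
        i_cvi = sum(ratings) / len(ratings)  # share of experts judging the item relevant
        print(f"{item}: I-CVI = {i_cvi:.2f}")  # low values flag items to revise or drop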

Criterion-related validity is used when the researcher wants to determine whether the scores from an instrument are a good predictor of an expected outcome. To assess this type of validity, the researcher must be able to define the expected outcome; a correlation coefficient of .60 or above indicates a significant, positive relationship (Creswell, 2005). Construct validity is established by determining whether the scores recorded by an instrument are meaningful, significant, useful, and purposeful. To determine whether construct validity has been achieved, the scores must be assessed both statistically and practically. This can be done by comparing the relationship of an individual question to the overall scale, by testing a theory to determine whether the outcome supports it, and by correlating the scores with other similar or dissimilar variables. Correlation with similar instruments is referred to as convergent validity; correlation with dissimilar instruments is divergent validity.
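
The sketch below, using hypothetical data, computes the Pearson correlations behind the convergent and divergent comparisons; numpy's corrcoef returns a correlation matrix, from which the off-diagonal entry is taken.

    import numpy as np

    # Hypothetical scores for the same eight respondents on three measures.
    new_scale  = np.array([10, 14, 9, 16, 12, 18, 11, 15])   # instrument being validated
    similar    = np.array([11, 13, 10, 17, 12, 17, 10, 16])  # established measure of the same construct
    dissimilar = np.array([7, 9, 12, 6, 10, 5, 11, 8])       # measure of an unrelated construct

    convergent_r = np.corrcoef(new_scale, similar)[0, 1]
    divergent_r = np.corrcoef(new_scale, dissimilar)[0, 1]

    print(f"convergent r = {convergent_r:.2f}")  # expect a strong positive correlation
    print(f"divergent  r = {divergent_r:.2f}")   # expect a weak or negative correlation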

Resources

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson Education.

George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston, MA: Allyn and Bacon.

Murphy, L. L., Impara, J. C., & Plake, B. S. (Eds.). (1999). Tests in print V. Lincoln, NE: Buros Institute of Mental Measurements.