On this page you’ll learn about the four data levels of measurement (nominal, ordinal, interval, and ratio) and why they are important. Let’s deal with the importance part first.

Knowing the level of measurement of your variables is important for two reasons. First, each level of measurement conveys a different amount of detail: nominal variables provide the least detail, ordinal variables provide somewhat more, and interval and ratio variables provide the most.

In a **nominal** level variable, values are grouped into categories that have no meaningful order. For example, gender and political affiliation are nominal level variables. Each member of a group is simply assigned a label, and there is no hierarchy among the labels. Typical descriptive statistics associated with nominal data are frequencies and percentages.
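To illustrate, here is a minimal sketch of computing frequencies and percentages for a nominal variable, using hypothetical political-affiliation responses:

```python
from collections import Counter

# Hypothetical responses for a nominal variable: political affiliation.
responses = ["Democrat", "Republican", "Independent", "Democrat",
             "Democrat", "Republican", "Independent", "Democrat"]

counts = Counter(responses)   # frequency of each category
total = len(responses)

for category, freq in counts.items():
    pct = 100 * freq / total  # percentage of all responses
    print(f"{category}: {freq} ({pct:.1f}%)")
```

Note that the categories could be listed in any order; nothing in the output depends on an ordering of the labels.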

**Ordinal** level variables are nominal level variables with a meaningful order. For example, horse race finishers can be assigned labels of first, second, third, fourth, etc., and these labels have an ordered relationship among them (i.e., first is higher than second, second is higher than third, and so on). As with nominal level variables, ordinal level variables are typically described with frequencies and percentages.

**Interval** and ratio level variables (also called continuous level variables) carry the most detail. Mathematical operations such as addition, subtraction, multiplication, and division can be accurately applied to the values of these variables. An example would be the amount of milk used in a cookie recipe (measured in cups). This variable has arithmetic properties such that 2 cups of milk is exactly twice as much as 1 cup of milk. Additionally, the difference between 1 and 2 cups of milk is exactly the same as the difference between 2 and 3 cups of milk. Interval and ratio level variables are typically described using means and standard deviations.
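As a sketch, the mean and standard deviation of a continuous variable can be computed with Python's standard `statistics` module; the cups-of-milk values below are hypothetical:

```python
import statistics

# Hypothetical measurements of a ratio-level variable: cups of milk per recipe.
cups = [1.0, 1.5, 2.0, 2.0, 2.5, 3.0]

mean_cups = statistics.mean(cups)  # the arithmetic mean is meaningful here
sd_cups = statistics.stdev(cups)   # sample standard deviation

# Ratio properties hold: 2 cups is exactly twice 1 cup,
# and the 1-to-2 cup gap equals the 2-to-3 cup gap.
print(f"mean = {mean_cups:.2f} cups, sd = {sd_cups:.2f} cups")
```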

The second reason levels of measurement matter is that different statistical tests are appropriate for variables at different levels of measurement. For example, chi-square tests of independence are most appropriate for nominal level data. The Mann-Whitney U test is most appropriate for an ordinal level dependent variable and a nominal level independent variable. An ANOVA is most appropriate for a continuous level dependent variable and a nominal level independent variable.


A nominal variable is one in which values serve only as labels, even if those values are numbers. For example, if we want to categorize male and female respondents, we could use the number 1 for male and 2 for female. However, the values 1 and 2 in this case do not represent any meaningful order or carry any mathematical meaning; they are simply labels. Nominal data cannot be used to compute many common statistics, such as the mean and standard deviation, because those statistics are meaningless when applied to arbitrary labels.

However, nominal variables can be used in cross tabulations, and the chi-square test of independence can be performed on a cross-tabulation of nominal data.
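The chi-square statistic for a cross-tabulation can be sketched with the standard library alone. The 2x2 table below (gender by a yes/no preference) is hypothetical, and in practice you would compare the statistic to a chi-square distribution (here with 1 degree of freedom) or use a library such as `scipy.stats.chi2_contingency`:

```python
# Hypothetical 2x2 cross-tabulation of two nominal variables:
# rows = gender (male, female), columns = preference (yes, no).
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum of (O - E)^2 / E over all cells,
# where E = (row total * column total) / grand total.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(f"chi-square = {chi_square:.2f}")
```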

Values of ordinal variables have a meaningful order to them. For example, education level (with possible values of high school, undergraduate degree, and graduate degree) would be an ordinal variable. There is a definitive order to the categories (i.e., graduate is higher than undergraduate, and undergraduate is higher than high school), but we cannot make any other arithmetic assumptions beyond that. For instance, we cannot assume that the difference in education level between undergraduate and high school is the same as the difference between graduate and undergraduate.

We can use frequencies, percentages, and certain non-parametric statistics with ordinal data. However, means, standard deviations, and parametric statistical tests are generally not appropriate for ordinal data.
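A short sketch of appropriate descriptives for ordinal data: frequencies reported in the variable's natural order, plus the median, which relies only on order and so is safe for ordinal variables. The education-level data below are hypothetical:

```python
from collections import Counter

# Hypothetical ordinal variable: education level, in its meaningful order.
levels = ["high school", "undergraduate", "graduate"]
rank = {level: i for i, level in enumerate(levels)}

data = ["undergraduate", "high school", "graduate",
        "undergraduate", "undergraduate", "high school", "graduate"]

# Frequencies and percentages, reported in the variable's natural order.
counts = Counter(data)
for level in levels:
    pct = 100 * counts[level] / len(data)
    print(f"{level}: {counts[level]} ({pct:.1f}%)")

# The median uses only the ordering of the categories, never their "distance".
ordered = sorted(data, key=rank.get)
median_level = ordered[len(ordered) // 2]
print("median:", median_level)
```

Note that a mean of the rank codes (0, 1, 2) would implicitly assume equal spacing between categories, which is exactly the assumption ordinal data does not support.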

For interval variables, we can make arithmetic assumptions about the degree of difference between values. An example of an interval variable would be temperature. We can correctly assume that the difference between 70 and 80 degrees is the same as the difference between 80 and 90 degrees. However, the mathematical operations of multiplication and division do not apply to interval variables. For instance, we cannot accurately say that 100 degrees is twice as hot as 50 degrees. Additionally, interval variables often do not have a meaningful zero-point. For example, a temperature of zero degrees (on Celsius and Fahrenheit scales) does not mean a complete absence of heat.

Some researchers treat variables measured with Likert scales (e.g., with labels such as 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree) as interval variables. However, treating Likert scale responses as interval data assumes that the differences between adjacent points on the scale are all equal. That is, using a 5-point Likert scale as an interval scale assumes that the difference between strongly agree and agree is the same as the difference between agree and neutral. This is often not a safe assumption to make, so Likert scale responses are usually better treated as ordinal.

An interval variable can be used to compute commonly used statistical measures such as the average (mean), standard deviation, and the Pearson correlation coefficient. Many other advanced statistical tests and techniques also require interval or ratio data.
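As a sketch, the Pearson correlation coefficient can be computed from first principles as the covariance divided by the product of the standard deviations. The hours-studied and exam-score values below are hypothetical (Python 3.10+ also offers `statistics.correlation` for the same calculation):

```python
import statistics

# Hypothetical interval/ratio data: hours studied and exam score.
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 60.0, 63.0, 71.0, 79.0]

mean_x = statistics.mean(hours)
mean_y = statistics.mean(scores)

# Pearson r: sum of cross-products over the root of the product
# of the sums of squared deviations.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, scores))
ss_x = sum((x - mean_x) ** 2 for x in hours)
ss_y = sum((y - mean_y) ** 2 for y in scores)
r = cov / (ss_x * ss_y) ** 0.5

print(f"r = {r:.3f}")
```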

All arithmetic operations are possible on a ratio variable. An example of a ratio variable would be weight (e.g., in pounds). We can accurately say that 20 pounds is twice as heavy as 10 pounds. Additionally, ratio variables have a meaningful zero-point (e.g., exactly 0 pounds means the object has no weight). Other examples of ratio variables include gross sales of a company, the expenditure of a company, the income of a company, etc.

A ratio variable can be used as a dependent variable for most parametric statistical tests such as *t*-tests, *F*-tests, correlation, and regression.