General

Normality

The normality assumption is one of the most misunderstood in all of statistics.  In multiple regression, the assumption requiring a normal distribution applies only to the disturbance term, not to the independent variables, as is often believed.  Perhaps the confusion about this assumption derives from the difficulty of understanding what this disturbance term refers to – simply

» Read More
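Because the assumption concerns the disturbance (error) term rather than the predictors, a practical check is to fit the regression and then examine the residuals.  A minimal sketch, assuming the statsmodels and scipy libraries and invented column names y, x1, and x2 (none of these appear in the entry itself):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Invented data frame with outcome y and predictors x1, x2
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 2 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(size=100)

# Fit the multiple regression and extract the residuals
# (the sample estimates of the disturbance term)
X = sm.add_constant(df[["x1", "x2"]])
model = sm.OLS(df["y"], X).fit()
residuals = model.resid

# Shapiro-Wilk test of normality applied to the residuals, not to x1 or x2
w_stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
```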

Nominal Variable Association

Nominal variable association refers to the statistical relationship(s) between nominal variables.  Nominal variables are variables that are measured at the nominal level and have no inherent ranking.  Examples of nominal variables that are commonly assessed in social science studies include gender, race, religious affiliation, and college major.  Crosstabulation (also known as contingency or bivariate tables)

» Read More
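As an illustration of how a crosstabulation of two nominal variables is typically tested, here is a minimal sketch using scipy's chi-square test of independence on an invented gender-by-major table (the counts and the use of scipy are assumptions, not taken from the entry):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x3 contingency table: rows = gender, columns = college major
observed = np.array([
    [30, 15, 25],   # counts for one gender across three majors
    [20, 25, 10],   # counts for the other gender across the same majors
])

# Chi-square test of independence between the two nominal variables
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```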

Conduct and Interpret a Sequential One-Way Discriminant Analysis

What is the Sequential One-Way Discriminant Analysis? Sequential one-way discriminant analysis is similar to one-way discriminant analysis.  Discriminant analysis predicts group membership by fitting a line through the scatter plot that separates the groups.  In the case of more than two independent variables, it fits a plane through the scatter cloud, thus separating all observations in

» Read More
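To give a flavour of how a discriminant function separates groups with a linear boundary, here is a minimal sketch using scikit-learn's LinearDiscriminantAnalysis on invented two-group data; the sequential (stepwise) variant discussed in the entry is not shown, only the basic fit-and-predict:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented data: two predictors, two groups with different centroids
rng = np.random.default_rng(1)
X_group0 = rng.normal(loc=[0, 0], scale=1.0, size=(50, 2))
X_group1 = rng.normal(loc=[2, 2], scale=1.0, size=(50, 2))
X = np.vstack([X_group0, X_group1])
y = np.array([0] * 50 + [1] * 50)

# Fit the discriminant function and predict group membership
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("Training accuracy:", lda.score(X, y))
print("Predicted group for a new observation:", lda.predict([[1.0, 1.5]]))
```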

Conduct and Interpret a Profile Analysis

What is the Profile Analysis? Profile Analysis is mainly concerned with test scores, more specifically with profiles of test scores.  Why is that relevant? Tests are commonly administered in medicine, psychology, and education to rank the participants of a study.  A profile shows differences in scores on the test.  If a psychologist administers a personality

» Read More

Time Series Analysis

Time series analysis is a statistical technique that deals with time series data, or trend analysis.  Time series data means that the data consist of observations recorded over a series of particular time periods or intervals.  The data is considered in three types: Time series data: A set of observations on the values that a variable takes at different

» Read More
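As a small illustration of working with data indexed by time periods, here is a sketch using pandas to build a monthly series and extract a crude trend with a rolling mean; the series name, dates, and values are invented for the example:

```python
import numpy as np
import pandas as pd

# Invented monthly series: 36 observations indexed by time period
rng = np.random.default_rng(2)
index = pd.date_range("2020-01-01", periods=36, freq="MS")
values = np.linspace(100, 130, 36) + rng.normal(scale=3, size=36)
series = pd.Series(values, index=index, name="sales")

# A 12-month rolling mean as a simple trend estimate
trend = series.rolling(window=12).mean()
print(series.tail(3))
print(trend.tail(3))
```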

Testing of Assumptions

In statistical analysis, all parametric tests assume certain characteristics about the data, also known as assumptions.  Violation of these assumptions changes the conclusions of the research and the interpretation of the results.  Therefore all research, whether for a journal article, thesis, or dissertation, must satisfy these assumptions for accurate interpretation.  Depending on the parametric analysis,

» Read More

Significance

Significance testing refers to the use of statistical techniques to determine whether a result observed in a sample reflects a genuine pattern in the population or is due merely to chance.  Usually, statistical significance is determined by the set alpha level, which is conventionally set at .05.  Inferential statistics provide the test statistics and

» Read More
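As a concrete illustration of comparing a test's p-value against the conventional alpha of .05, here is a minimal sketch with an independent-samples t-test on invented scores (scipy and the group values are assumptions for the example):

```python
import numpy as np
from scipy import stats

# Invented scores for two independent samples
rng = np.random.default_rng(3)
group_a = rng.normal(loc=50, scale=10, size=40)
group_b = rng.normal(loc=55, scale=10, size=40)

# Test statistic and p-value from the inferential test
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Statistical significance is judged against the preset alpha level
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Significant at alpha = .05" if p_value < alpha else "Not significant at alpha = .05")
```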

Run Test of Randomness

The run test of randomness is a non-parametric method that is used in cases where a parametric test is not appropriate.  In this test, two different random samples are obtained from populations with different continuous cumulative distribution functions.  Testing for randomness is carried out under a random model in which the observations

» Read More
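To make the one-sample version of the test concrete, here is a sketch that implements the large-sample runs test by hand, coding observations as above or below the median and comparing the observed number of runs with its expectation under randomness; it is written from the standard formulas rather than any particular library routine:

```python
import numpy as np
from scipy import stats

def runs_test(x):
    """Large-sample runs test of randomness around the median."""
    x = np.asarray(x, dtype=float)
    signs = x > np.median(x)                 # True = above median
    n1 = int(signs.sum())                    # observations above the median
    n2 = len(signs) - n1                     # observations at or below the median
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # count of runs

    # Expected number of runs and its variance under the hypothesis of randomness
    expected = 1 + 2 * n1 * n2 / (n1 + n2)
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - expected) / np.sqrt(variance)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return runs, z, p

# Invented sample to test for randomness
rng = np.random.default_rng(4)
sample = rng.normal(size=60)
runs, z, p = runs_test(sample)
print(f"runs = {runs}, z = {z:.2f}, p = {p:.4f}")
```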

Reliability Analysis

Reliability refers to the extent to which a scale produces consistent results when the measurements are repeated a number of times.  The analysis of reliability is called reliability analysis.  Reliability analysis is determined by obtaining the proportion of systematic variation in a scale, which can be done by determining the association between the scores obtained

» Read More
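One common way to quantify the proportion of systematic variation in a scale is Cronbach's alpha.  Here is a minimal sketch computing it from the standard formula on invented item scores; the choice of Cronbach's alpha and the simulated data are assumptions for illustration, not drawn from the entry:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented scores: 100 respondents answering a 5-item scale measuring one construct
rng = np.random.default_rng(5)
true_score = rng.normal(size=(100, 1))
items = true_score + rng.normal(scale=0.8, size=(100, 5))
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```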

Probability

Probability theory originated in the study of games of chance such as cards, coin tossing, and dice.  But in modern times, probability has great importance in decision making.  According to the classical theory, probability is the ratio of the number of favorable cases to the total number of equally likely cases.  Empirical or relative frequency probability

» Read More
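A small worked example of the classical ratio, assuming a standard 52-card deck (the deck example is an illustration, not from the entry):

```python
# Classical probability: favorable cases divided by equally likely cases
favorable = 4      # aces in a standard deck
total = 52         # equally likely cards that could be drawn
probability = favorable / total
print(f"P(ace) = {favorable}/{total} = {probability:.4f}")  # about 0.0769
```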

Meta Analysis

Meta-analysis is a statistical analysis that combines large collections of outcomes for the purpose of integrating the findings.  The idea behind conducting a meta-analysis is to help the researcher by summarizing the methodological literature on a question across the existing experimental research.  Measures of effect size are gathered from existing, previously

» Read More
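As an illustration of how gathered effect sizes can be integrated, here is a sketch of a simple fixed-effect (inverse-variance weighted) pooling of invented study results; the specific effect sizes, variances, and the fixed-effect model are assumptions for the example:

```python
import numpy as np

# Invented effect sizes (e.g., standardized mean differences) and their variances
effect_sizes = np.array([0.30, 0.45, 0.20, 0.55])
variances = np.array([0.02, 0.03, 0.015, 0.05])

# Fixed-effect model: weight each study by the inverse of its variance
weights = 1 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"Pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```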

Mathematical Expectation

Mathematical expectation, also known as the expected value, is the probability-weighted summation or integration of the possible values of a random variable.  For a discrete variable it is the sum of the products of each possible value and the probability of that value occurring, denoted P(x).  The expected value is a useful

» Read More
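A tiny worked example of that weighted sum for a discrete random variable, using a fair six-sided die (the die is an illustration, not from the entry):

```python
# Expected value of a fair six-sided die: sum of each value times its probability P(x)
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # each outcome is equally likely
expected_value = sum(x * p for x in values)
print(f"E[X] = {expected_value:.2f}")  # 3.50
```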

Latent Class Analysis

Latent Class Analysis (LCA) is a statistical technique related to factor, cluster, and regression techniques; it is a subset of structural equation modeling (SEM).  LCA is a technique in which constructs are identified and created from unobserved, or latent, subgroups, usually based on individual responses to multivariate categorical data.  These constructs are

» Read More

Hypothesis Testing

Hypothesis testing was introduced by Ronald Fisher, Jerzy Neyman, Karl Pearson and Pearson’s son, Egon Pearson.  Hypothesis testing is a statistical method that is used in making statistical decisions using experimental data.  A statistical hypothesis is basically an assumption that we make about a population parameter. Key terms and concepts: Null hypothesis: The null hypothesis is

» Read More

Hierarchical Linear Modeling (HLM)

Hierarchical linear modeling (HLM) is an ordinary least squares (OLS) regression-based analysis that takes the hierarchical structure of the data into account.  Hierarchically structured data are nested data in which groups of units are clustered together in an organized fashion, such as students within classrooms within schools.  The nested structure of the data violates the independence

» Read More
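To show how that nested structure can be accommodated rather than ignored, here is a minimal random-intercept sketch with statsmodels' mixed-effects model; the data frame and column names (score, hours, school) are invented for the example:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented nested data: students (rows) clustered within schools
rng = np.random.default_rng(6)
n_schools, n_students = 20, 25
school = np.repeat(np.arange(n_schools), n_students)
school_effect = np.repeat(rng.normal(scale=2.0, size=n_schools), n_students)
hours = rng.uniform(0, 10, size=n_schools * n_students)
score = 50 + 1.5 * hours + school_effect + rng.normal(scale=3.0, size=n_schools * n_students)
df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# Random-intercept model: fixed effect of hours, intercepts varying by school
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"]).fit()
print(model.summary())
```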

Survival Analysis

Survival analysis helps the researcher assess if, and why, certain individuals are exposed to a higher risk of experiencing an event of interest, such as death, machine failure, or drug relapse.  This is also referred to as event history analysis. Survival analysis consists of a wide variety of techniques that help the researcher analyze time-to-event

» Read More
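As a small taste of one such technique, here is a Kaplan-Meier sketch using the lifelines package on invented follow-up times and event indicators; the package choice and the simulated data are assumptions for illustration:

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Invented follow-up times (months) and event indicators (1 = event observed, 0 = censored)
rng = np.random.default_rng(7)
durations = rng.exponential(scale=24, size=100)
events = rng.binomial(1, 0.7, size=100)

# Kaplan-Meier estimate of the survival function
kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.survival_function_.head())
print("Median survival time:", kmf.median_survival_time_)
```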

Effect Size

Effect size is a statistical concept that measures the strength of the relationship between two variables on a numeric scale.  For instance, if we have data on the height of men and women and we notice that, on average, men are taller than women, the difference between the height of men and the height of

» Read More
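Continuing the height example, a common effect-size measure for a two-group mean difference is Cohen's d; here is a minimal sketch computed from its standard formula on invented heights (the numbers are made up for illustration):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented heights in centimetres
rng = np.random.default_rng(8)
men = rng.normal(loc=176, scale=7, size=60)
women = rng.normal(loc=163, scale=6, size=60)
print(f"Cohen's d = {cohens_d(men, women):.2f}")
```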

Data Levels and Measurement

All research needs particular data levels and measurement.  There are many procedures in statistics which require different data levels, and each procedure carries assumptions about the level at which the data are measured. Broadly, four types of data levels and measurement are used in every type of research: nominal, ordinal, interval, and ratio. Nominal data: A nominal scale is one

» Read More

Baron & Kenny’s Procedures for Mediational Hypotheses

Mediational hypotheses are hypotheses in which it is assumed that the effect of an independent variable on a dependent variable is transmitted through a mediating variable, while the independent variable may still affect the dependent variable directly. In other words, in a mediational hypothesis, the mediator variable is the intervening or

» Read More
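As a sketch of how Baron and Kenny's regression steps are often run in practice, here are the three models fitted with statsmodels on invented variables x (independent), m (mediator), and y (dependent); the variable names and data are assumptions for the example:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data in which x affects y partly through the mediator m
rng = np.random.default_rng(9)
n = 200
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.8, size=n)
y = 0.4 * m + 0.2 * x + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"x": x, "m": m, "y": y})

# Step 1: the independent variable must predict the dependent variable
step1 = smf.ols("y ~ x", data=df).fit()
# Step 2: the independent variable must predict the mediator
step2 = smf.ols("m ~ x", data=df).fit()
# Step 3: the mediator must predict the dependent variable controlling for x;
# the effect of x should shrink (partial mediation) or vanish (full mediation)
step3 = smf.ols("y ~ x + m", data=df).fit()

print("Step 1 (y ~ x):      b_x =", round(step1.params["x"], 3))
print("Step 2 (m ~ x):      b_x =", round(step2.params["x"], 3))
print("Step 3 (y ~ x + m):  b_x =", round(step3.params["x"], 3),
      " b_m =", round(step3.params["m"], 3))
```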