**Significance** testing refers to statistical techniques used to determine whether a result observed in a sample reflects a true effect in the population or could plausibly have arisen by chance. Statistical significance is judged against a preset alpha level, conventionally .05. Inferential statistics yield a test statistic and an associated *p* value (or significance value); if the *p* value falls below the chosen alpha level, the result is said to be statistically significant. For example, regression analysis can be used to draw a conclusion as to whether a true relationship exists between a dependent variable and one or more independent variables. Significance then determines whether that relationship can be claimed: if a regression coefficient is significant at the .05 level, we reject the null hypothesis and accept the alternative hypothesis that a relationship exists between the dependent and independent variable(s). As another example, if we draw a sample from a population and want to generalize from it at the .05 significance level, we must first determine whether the sample represents the characteristics of that population before using it for further analysis. If the relevant test is not significant at the .05 level, we cannot conclude, at least statistically, that the sample represents the population. Significance is used for the following inferential analyses:
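
As a sketch of the *p*-value-versus-alpha decision, the example below runs a one-sample z-test with hypothetical numbers (a sample mean of 103 from n = 100, against a hypothesized population mean of 100 with known SD 15); only the Python standard library is assumed:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    # Two-tailed one-sample z-test: is the sample mean consistent
    # with the hypothesized population mean?
    se = pop_sd / math.sqrt(n)
    z = (sample_mean - pop_mean) / se
    return 2.0 * (1.0 - normal_cdf(abs(z)))

alpha = 0.05  # conventional significance level
p = z_test_p_value(sample_mean=103.0, pop_mean=100.0, pop_sd=15.0, n=100)
decision = "reject H0" if p < alpha else "fail to reject H0"
```

Here z = 2.0, so the two-tailed *p* value is about .0455, just under the .05 alpha level, and the null hypothesis is rejected.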

**Parametric tests:** Parametric tests make assumptions about the underlying distribution of the data, most often that it is normal. When those assumptions are met, a parametric test is more powerful than its non-parametric counterpart. The following are common parametric tests:

- Linear regression analyses
- T-tests and analyses of variance on the difference of means
- Normal curve Z-tests of the differences of means and proportions
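
The last item above, a normal-curve z-test of the difference of two proportions, can be sketched with hypothetical counts (60/200 successes versus 40/200), using a pooled proportion for the standard error; this is a stdlib-only illustration, not a full implementation:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_proportion_z_test(x1, n1, x2, n2):
    # z-test for the difference of two proportions, using the
    # pooled proportion to estimate the standard error under H0.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2.0 * (1.0 - normal_cdf(abs(z)))
    return z, p_value

# Hypothetical data: 30% vs. 20% success rates in two groups of 200
z, p = two_proportion_z_test(60, 200, 40, 200)
```

With these numbers z is about 2.31, significant at the .05 level.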

**Non-parametric tests:** These tests make no assumptions about the distribution of the data. The following are common non-parametric tests:

- Chi-square tests
- Fisher's exact test
- Runs test
- One-Sample Kolmogorov-Smirnov test
- Mann-Whitney U test
- Wald-Wolfowitz runs test
- Kruskal-Wallis test
- Jonckheere-Terpstra test
- McNemar test
- Wilcoxon signed-rank test
- Friedman test
- Kendall's W
- Cochran's Q test
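
As one illustration from this family, the Mann-Whitney U statistic can be computed directly; the sketch below uses the asymptotic normal approximation for the *p* value, which is reasonable for moderate samples without many ties, and entirely hypothetical data:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mann_whitney_u(a, b):
    # U statistic: number of pairs (x from a, y from b) with x > y,
    # counting ties as one half.
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def mann_whitney_p(a, b):
    # Two-tailed p value from the normal approximation to U.
    n1, n2 = len(a), len(b)
    u = mann_whitney_u(a, b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Hypothetical, completely separated groups
a, b = [1, 2, 3, 4, 5], [6, 7, 8, 9, 10]
p = mann_whitney_p(a, b)
```

Because every value in `a` is below every value in `b`, U is 0 and the test is significant at the .05 level.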

**Key concepts and terms:**

**Significance and Type I error:** A significant result indicates that the relationship observed in the data is unlikely to be due to chance alone. A Type I error occurs when we reject a null hypothesis that is actually true, i.e., one that should have been retained. The alpha level is the probability of making a Type I error.
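
The link between alpha and Type I error can be illustrated by simulation: when the null hypothesis is true, a test run at alpha = .05 should falsely reject about 5% of the time. A stdlib-only sketch using a known-SD z-test on simulated data:

```python
import math
import random

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = random.Random(42)           # fixed seed for reproducibility
alpha, n, trials = 0.05, 30, 2000
rejections = 0
for _ in range(trials):
    # Draw from N(0, 1): the null hypothesis (mean = 0) is TRUE.
    sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    z = mean / (1.0 / math.sqrt(n))          # known SD = 1
    p = 2.0 * (1.0 - normal_cdf(abs(z)))
    if p < alpha:
        rejections += 1                      # a Type I error
rate = rejections / trials                   # close to alpha
```

The observed rejection rate hovers near .05, matching the alpha level, even though no real effect exists.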

**Confidence limits:** Confidence limits are the upper and lower bounds of a confidence interval. For a specified hypothesis, if the hypothesized parameter value falls within the interval calculated from the sample, the result is not significant and the null hypothesis is retained; if it falls outside the interval, the result is significant and the null hypothesis is rejected. For normally distributed data, the 95% confidence limits for the population mean are the sample mean plus or minus 1.96 times the standard error.
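
A minimal sketch of these limits, using hypothetical values (sample mean 100, SD 15, n = 36):

```python
import math

def confidence_limits(mean, sd, n, z_crit=1.96):
    # 95% confidence limits for a mean of normally distributed data:
    # the mean plus or minus 1.96 standard errors.
    se = sd / math.sqrt(n)
    return mean - z_crit * se, mean + z_crit * se

lower, upper = confidence_limits(mean=100.0, sd=15.0, n=36)
```

Here the standard error is 15 / 6 = 2.5, so the limits are 95.1 and 104.9; a hypothesized population mean of, say, 90 would fall outside them and be rejected.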

**Power and Type II error:** A Type II error occurs when we fail to reject a false null hypothesis, that is, when we conclude there is no relationship even though one actually exists. The probability of a Type II error is beta, and power is one minus beta. Whether a Type I or a Type II error is more serious depends on the context of the study.
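
Power can be approximated for a z-test with known SD; the sketch below uses hypothetical values (a true effect of 5 points, SD 15, n = 50) and ignores the negligible contribution of the far tail:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_power(effect, sd, n, alpha_z=1.96):
    # Approximate power of a two-sided z-test: the probability of
    # rejecting H0 when the true mean differs by `effect`.
    # (The far tail's tiny contribution is ignored.)
    se = sd / math.sqrt(n)
    beta = normal_cdf(alpha_z - abs(effect) / se)  # Type II error rate
    return 1.0 - beta

power = z_test_power(effect=5.0, sd=15.0, n=50)
```

With these numbers power is roughly .65, meaning a true 5-point effect would be detected about 65% of the time.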

**One-tailed vs. two-tailed tests:** A two-tailed test assesses whether a parameter simply differs from a hypothesized value, in either direction. A one-tailed test specifies the direction in advance, testing whether the parameter is strictly less than or strictly greater than the hypothesized value. Because the entire rejection region sits in a single tail, a one-tailed test reaches significance more easily in the predicted direction.
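
The practical difference shows up in the *p* value: for a z-test, the one-tailed *p* is half the two-tailed *p*. A sketch with a hypothetical observed z of 1.80:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = 1.80  # hypothetical observed z statistic
p_two_tailed = 2.0 * (1.0 - normal_cdf(abs(z)))  # H1: mean differs
p_one_tailed = 1.0 - normal_cdf(z)               # H1: mean is greater
```

Here the one-tailed *p* (about .036) is significant at .05 while the two-tailed *p* (about .072) is not, which is why the direction must be specified before the data are seen.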

**Asymptotic vs. exact vs. Monte Carlo significance:** Most significance tests are asymptotic: they assume the sample size is large enough for the sampling distribution of the test statistic to be well approximated. When the sample size is very small, an exact test should be used instead; in SPSS, exact tests are available in the Exact Tests add-on module. Monte Carlo significance approximates the exact *p* value by repeated random resampling, and is useful when the data set is too large for an exact computation to be feasible.
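
Monte Carlo significance can be sketched as a permutation test: shuffle the pooled data repeatedly and count how often a random regrouping produces a difference at least as extreme as the observed one. The data and group labels below are hypothetical:

```python
import random

def mean_diff(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

def permutation_p_value(a, b, n_resamples=5000, seed=0):
    # Monte Carlo significance: shuffle the pooled data many times and
    # count how often a random split yields a mean difference at least
    # as extreme as the observed one.
    rng = random.Random(seed)
    pooled = list(a) + list(b)
    observed = abs(mean_diff(a, b))
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(mean_diff(perm_a, perm_b)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_resamples + 1)  # add-one correction

# Hypothetical measurements for two groups
a = [12.1, 14.3, 13.8, 15.0, 14.6, 13.2]
b = [10.4, 11.1, 10.9, 12.0, 11.5, 10.7]
p = permutation_p_value(a, b)
```

Because the groups barely overlap, random regroupings almost never match the observed difference, so the Monte Carlo *p* value comes out well below .05.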