Time Series Analysis

Time series analysis is a statistical technique for analyzing data collected over time, often used for trend analysis.  In time series data, observations are recorded at particular time periods or intervals.  Data generally fall into three types:

Time series data: A set of observations on the values that a variable takes at different times.

Cross-sectional data: Data of one or more variables, collected at the same point in time.

Pooled data: A combination of time series data and cross-sectional data.


Terms and concepts:

Dependence: Dependence refers to the association between observations of the same variable at prior time points; in a time series, a value is typically related to the values that precede it.

Stationarity: A stationary series has a mean that remains constant over time; if past effects accumulate and the values increase toward infinity, stationarity is not met.

Differencing: Used to make a series stationary, to de-trend it, and to control autocorrelation; however, some time series analyses do not require differencing, and an over-differenced series can produce inaccurate estimates.
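As a minimal sketch of what differencing does (using NumPy and a simulated random walk rather than real data), first differencing turns a non-stationary random walk back into its stationary shocks:

```python
import numpy as np

# Simulated example: a random walk (the cumulative sum of random
# shocks) is not stationary -- its variance grows with time.
rng = np.random.default_rng(0)
shocks = rng.normal(size=500)
walk = np.cumsum(shocks)

# First difference: subtract each value from the one that follows it.
diff1 = np.diff(walk)

# The differenced series recovers the stationary shocks exactly.
print(np.allclose(diff1, shocks[1:]))  # True
```

Differencing the walk a second time would correspond to d = 2 in the ARIMA notation introduced below; each pass shortens the series by one observation.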

Specification: May involve the testing of the linear or non-linear relationships of dependent variables by using models such as ARIMA, ARCH, GARCH, VAR, Co-integration, etc.

Exponential smoothing in time series analysis: This method predicts the value for the next period from the current and past values.  It averages the data so that the nonsystematic components of the individual observations cancel each other out.  Exponential smoothing is used for short-term prediction.  Alpha, Gamma, Phi, and Delta are the parameters that estimate the effect of the time series data: Alpha is used when no seasonality is present in the data, Gamma when the series has a trend, and Delta when seasonality cycles are present.  A model is applied according to the pattern of the data.

Curve fitting in time series analysis: Curve fitting regression is used when the data follow a non-linear relationship; the dependent variable is modeled as a non-linear function of case, where case is the sequential case number.
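As a sketch of the simplest smoothing case above (no trend, no seasonality, governed by Alpha alone; the series and alpha value here are made up for illustration):

```python
import numpy as np

def simple_exp_smooth(x, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing.

    alpha in (0, 1]: larger alpha gives more weight to recent values.
    The smoothed level after the last observation is the prediction
    for the next period.
    """
    level = x[0]  # initialize the level with the first observation
    for obs in x[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

series = np.array([3.0, 4.0, 5.0, 4.0, 6.0, 5.0])
forecast = simple_exp_smooth(series, alpha=0.5)
print(forecast)  # 5.03125
```

Models with a trend (Gamma) or seasonal cycles (Delta) add further smoothing equations on top of this level recursion.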

Curve fitting can be performed by selecting “Regression” from the analysis menu and then selecting “Curve Estimation” from the regression options.  Then select the desired curve: “Linear,” “Power,” “Quadratic,” “Cubic,” “Inverse,” “Logistic,” “Exponential,” or another available form.
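Outside of a menu-driven package, the same curve estimation can be sketched with NumPy polynomial fits (the data here are a hypothetical quadratic trend in the case number, not real observations):

```python
import numpy as np

# Hypothetical series with a quadratic (non-linear) trend in `case`,
# the sequential case number.
case = np.arange(1.0, 21.0)
y = 2.0 + 0.5 * case + 0.1 * case**2

# Fit linear, quadratic, and cubic curves and compare residual error;
# the best-fitting form has the smallest sum of squared errors (SSE).
for degree, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    coeffs = np.polyfit(case, y, degree)
    sse = np.sum((y - np.polyval(coeffs, case)) ** 2)
    print(f"{name:9s} SSE = {sse:.6f}")
```

Because the data were generated from a quadratic, the quadratic fit recovers the coefficients exactly and its SSE is essentially zero.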


ARIMA stands for autoregressive integrated moving average.  This method is also known as the Box-Jenkins method.

Identification of ARIMA parameters:

Autoregressive component: AR stands for autoregressive.  The autoregressive parameter is denoted by p.  When p = 0, there is no auto-correlation in the series.  When p = 1, the series is auto-correlated up to one lag.

Integrated: In ARIMA time series analysis, integrated is denoted by d.  Integration is the inverse of differencing.  When d = 0, the series is stationary and does not need to be differenced.  When d = 1, the series is not stationary, and the first difference is taken to make it so.  When d = 2, the series has been differenced twice.  Differencing more than twice is usually not reliable.

Moving average component: MA stands for moving average, which is denoted by q.  In ARIMA, q = 1 means that the error term is auto-correlated at one lag.

To test whether the series and its error term are auto-correlated, we usually use the Durbin-Watson (D-W) test, the ACF (autocorrelation function), and the PACF (partial autocorrelation function).
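A sketch of how the ACF guides identification of p (pure NumPy; the AR(1) series is simulated here with a made-up coefficient of 0.7):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function up to `nlags` lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / denom
                     for k in range(nlags + 1)])

# Simulate an AR(1) series (p = 1): each value depends on the one before.
rng = np.random.default_rng(1)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.7 * x[t - 1] + e[t]

r = acf(x, nlags=3)
# For an AR(1) process with coefficient 0.7, the autocorrelation at
# lag k is roughly 0.7**k, decaying geometrically across the lags.
print(np.round(r, 2))
```

A geometrically decaying ACF with a PACF that cuts off after one lag is the classic signature of an AR(1) series in Box-Jenkins identification.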

Decomposition: Refers to separating a time series into trend, seasonal effects, and remaining variability.

Assumptions:

Stationarity: The first assumption is that the series is stationary, meaning its mean and variance are constant over a long time period (many tests additionally assume the series is normally distributed).

Uncorrelated random error: We assume that the error term is randomly distributed and that its mean and variance are constant over the time period.  The Durbin-Watson test is the standard test for correlated errors.
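The Durbin-Watson statistic itself is simple enough to sketch directly (the residual series here are simulated, not real regression output):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: the sum of squared successive differences
    of the residuals over their sum of squares.  Values near 2 indicate
    uncorrelated errors; toward 0, positive autocorrelation; toward 4,
    negative autocorrelation."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)

white = rng.normal(size=5000)   # uncorrelated random errors
ar = np.zeros(5000)             # strongly autocorrelated errors
e = rng.normal(size=5000)
for t in range(1, 5000):
    ar[t] = 0.9 * ar[t - 1] + e[t]

print(round(durbin_watson(white), 2))  # close to 2
print(round(durbin_watson(ar), 2))     # well below 2
```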

No outliers: We assume that there are no outliers in the series.  Outliers can strongly affect conclusions and may be misleading.

Random shocks (a random error component): If shocks are present, they are assumed to be randomly distributed with a mean of 0 and a constant variance.

Statistics Solutions can assist with your quantitative analysis by helping you develop your methodology and results chapters. The services that we offer include:

Data Analysis Plan

  • Edit your research questions and null/alternative hypotheses
  • Write your data analysis plan; specify the statistics that address the research questions and their assumptions, and justify why they are the appropriate statistics; provide references
  • Justify your sample size/power analysis, provide references
  • Explain your data analysis plan to you so you are comfortable and confident
  • Two hours of additional support with your statistician

Quantitative Results Section (Descriptive Statistics, Bivariate and Multivariate Analyses, Structural Equation Modeling, Path analysis, HLM, Cluster Analysis)

  • Clean and code dataset
  • Conduct descriptive statistics (i.e., mean, standard deviation, frequency and percent, as appropriate)
  • Conduct analyses to examine each of your research questions
  • Write-up results
  • Provide APA 6th edition tables and figures
  • Explain chapter 4 findings
  • Ongoing support for entire results chapter statistics

*Please call 877-437-8622 to request a quote based on the specifics of your research, or email Info@StatisticsSolutions.com.

