13. MODULE OBJECTIVE
This module explores the possibility of correlation among the error terms across observations.
The OLS estimation discussed earlier rests on the assumption that there is no relationship among
the error terms; that is, the covariance between any two error terms should be zero [i.e., Cov (Ui, Uj) = 0
for i ≠ j]. If this assumption is violated, autocorrelation may be present. It is also called serial
correlation. So, serial correlation occurs when the errors in an estimated econometric model are
correlated with one another.
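As a sketch of what correlated errors look like, the snippet below simulates disturbances that follow a first-order autoregressive scheme, ut = ρut−1 + εt, and then measures their lag-1 correlation. The value ρ = 0.8 and the sample size are illustrative choices, not values from this module.

```python
# Simulate serially correlated errors u_t = rho * u_{t-1} + e_t
# and measure their lag-1 autocorrelation.
import random

random.seed(42)
rho = 0.8          # illustrative autocorrelation coefficient
n = 500
u = [random.gauss(0, 1)]
for _ in range(n - 1):
    u.append(rho * u[-1] + random.gauss(0, 1))

# Sample lag-1 autocorrelation: cov(u_t, u_{t-1}) / var(u_t)
mean_u = sum(u) / n
num = sum((u[t] - mean_u) * (u[t - 1] - mean_u) for t in range(1, n))
den = sum((x - mean_u) ** 2 for x in u)
print(round(num / den, 2))  # close to rho for a large sample
```

When ρ = 0 the same calculation gives a lag-1 correlation near zero, which is the situation the OLS assumption requires.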
In this module, we deal with the following:
1. WHAT IS AUTOCORRELATION AND WHAT IS ITS NATURE?
2. WHAT ARE ITS CONSEQUENCES?
3. IS IT REALLY A PROBLEM?
4. DETECTION CRITERIA
5. CAUSES OF AUTOCORRELATION
6. REMEDIAL MEASURES
WHAT IS AUTOCORRELATION?
Autocorrelation (or serial correlation) exists when the error terms of a model are correlated
across observations, i.e., Cov (Ui, Uj) ≠ 0 for i ≠ j. In time-series data it typically appears as
correlation between successive disturbances.
CONSEQUENCES OF AUTOCORRELATION
Estimation by OLS requires that the estimators be BLUE (best linear unbiased estimators). If
they are not, the reliability of the model is in question. Specifically, in the presence of
autocorrelation the OLS estimators remain unbiased, but they no longer have minimum variance,
and the usual standard errors are biased, so the t and F tests become unreliable.
IS IT REALLY A PROBLEM?
At first glance, any estimator that fails to be BLUE is a problem. In the case of autocorrelation,
however, it depends on the objective of the model. If the objective is prediction (or forecasting),
then mild autocorrelation is not a serious problem. But if the objective is model reliability, then
it is a serious issue even at a minor level. If the disturbance term is instead generated as purely
random, uncorrelated shocks — such error terms are called white noise errors — then there is no
issue of serial correlation and the estimated model can be used for prediction.
DETECTION CRITERIA
Autocorrelation can be detected only after estimation. First we estimate the model, then obtain
the residuals; once we have the residuals, detection becomes feasible. A simple graphical check
is to plot the residuals in time sequence, or alternatively to plot the standardized residuals
against time. Apart from these, there are several quantitative tests that can supplement the
purely graphical approach. These are as follows:
• RUNS TEST
• DURBIN-WATSON ‘D’ TEST
• BREUSCH-GODFREY TEST
• VON NEUMANN RATIO TEST
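The runs test is the simplest of these to sketch: it counts runs of same-signed residuals. Too few runs suggests positive autocorrelation (residuals cluster by sign), while too many suggests negative autocorrelation. The residual values below are illustrative only.

```python
# Sketch of the runs test idea: count runs of same-signed residuals.
def count_runs(residuals):
    signs = [r >= 0 for r in residuals]
    # A new run starts whenever the sign changes
    return 1 + sum(signs[i] != signs[i - 1] for i in range(1, len(signs)))

resid = [0.5, 0.7, 0.6, -0.2, -0.4, -0.1, 0.3, 0.2, -0.5]
print(count_runs(resid))  # 4 runs: (+ + +)(- - -)(+ +)(-)
```

A formal version compares the observed run count with its expected value under randomness; this sketch only shows the counting step.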
Among these, the most frequently used test for detecting autocorrelation is the Durbin-Watson
‘d’ test. It is computed as follows:
d = Σ (ût − ût−1)² / Σ ût², where the sums run over the sample residuals.
With some simplification, we can write d ≈ 2 (1 − ρ), where ρ is the first-order autocorrelation
coefficient of the errors.
If ρ = 0, d = 2 and the system has no autocorrelation;
If ρ = -1, d = 4 and the system has perfect negative autocorrelation;
If ρ = 1, d = 0 and the system has perfect positive autocorrelation.
So, d varies from 0 to 4, and a value close to 2 indicates that the estimated model is free of autocorrelation.
The test, however, depends upon the following assumptions:
• The errors follow the first-order autoregressive scheme, ut = ρut−1 + εt
• There are no lagged dependent variables used as explanatory variables
• There is an intercept in the original model
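The d statistic above can be sketched directly from a list of OLS residuals. The residual values here are illustrative; in practice they would come from an estimated model.

```python
# Sketch: Durbin-Watson d = sum((u_t - u_{t-1})^2) / sum(u_t^2)
def durbin_watson(u):
    num = sum((u[t] - u[t - 1]) ** 2 for t in range(1, len(u)))
    den = sum(x ** 2 for x in u)
    return num / den

# Smoothly drifting (positively autocorrelated) residuals give d well below 2
resid = [1.0, 0.9, 0.7, 0.6, 0.4, -0.1, -0.3, -0.5, -0.6, -0.8]
print(round(durbin_watson(resid), 2))  # → 0.12, far below 2
```

Alternating-sign residuals would instead push d toward 4, matching the ρ = −1 case above.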
CAUSES OF AUTOCORRELATION
• Interpolation or extrapolation of data
• Misspecification of the random term
• An over-determined model
• An under-determined model
• Presence of lagged explanatory variables in the system
• Wrong data transformation
• Manipulation of data