# Testing for the Independence of Regression Disturbances

The problem considered in this paper is the following: in a linear regression model $y = X \beta + \varepsilon$ (where $X$ is $n \times k$ of rank $r \leqslant k$), test the null hypothesis, $H_0$, that the disturbance vector $\varepsilon^{\prime} = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n)$ is distributed as multivariate normal with mean vector 0 and variance-covariance matrix proportional to $\varSigma_0$, against the alternative hypothesis, $H_1$, that it is distributed as multivariate normal with mean vector 0 and variance-covariance matrix proportional to $\varSigma_1$. Three test statistics, $s_1$, $s_2$, and $s_3$, all functions of the estimated disturbances from the fitted regression, are proposed for testing $H_0$. It is shown in Section 3 that the tests based on $s_1$, $s_2$, and $s_3$ are all unbiased and that the test $T(1)$ based on $s_1$ is most powerful. In Section 4, $\varepsilon$ is assumed to have one of three special covariance structures, namely a first-order stationary Markov process, a uniform covariance structure, and a moving average of order one, and the general results of Section 3 are specialized accordingly. It is also shown that the hypothesis $H_0$ cannot, in general, be tested directly and that only an implication of it can be tested. Section 5 contains three numerical illustrations comparing the procedures proposed in this study with the Durbin-Watson procedure.
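As a point of comparison for the numerical illustrations in Section 5, the Durbin-Watson procedure can be sketched as follows. The block below is a minimal illustration, not the paper's own computation: the design matrix, coefficient vector, and AR(1) disturbance parameter are all assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = X @ beta + eps, where eps follows a first-order stationary
# Markov (AR(1)) scheme; n, k, beta, and rho are illustrative assumptions.
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])
rho = 0.6
eps = np.zeros(n)
eps[0] = rng.normal()
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal()
y = X @ beta + eps

# Estimated disturbances: residuals from the fitted OLS regression.
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Durbin-Watson statistic d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.
# Values near 2 are consistent with no first-order autocorrelation;
# values well below 2 point to positive autocorrelation.
d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
print(round(d, 3))
```

Because the simulated disturbances are positively autocorrelated, the statistic should fall noticeably below 2 in this setup.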