Appendix C — Regressions

C.1 OLS estimates

The base model is:

\mathbf y = \mathbf X \boldsymbol \beta + \mathbf u \tag{C.1}

The OLS estimator of \, \boldsymbol \beta is:

\hat{\boldsymbol{\beta}} \ = \ (\mathbf{X}' \mathbf{X})^{-1} \mathbf{X}' \mathbf{y} \tag{C.2}
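A minimal numerical sketch of Equation C.2 (in Python/NumPy, using simulated data; all variable names and parameter values below are illustrative assumptions, not part of the text):

```python
import numpy as np

# Simulate data from the base model (Equation C.1); values are illustrative
rng = np.random.default_rng(0)
n = 200                                                      # number of observations
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # constant plus two regressors
beta = np.array([1.0, 0.5, -2.0])                            # "true" coefficients for the simulation
u = rng.normal(size=n)                                       # error term
y = X @ beta + u                                             # Equation C.1

# OLS estimator, Equation C.2; solving the normal equations avoids an explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)
```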

  • Substituting Equation C.1 into Equation C.2 yields an important alternative formula for \hat{\boldsymbol{\beta}}:

\hat{\boldsymbol{\beta}} \ = \ \boldsymbol{\beta}+\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}' \mathbf{u} \tag{C.3}

  • or, equivalently, the sampling error:

(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}) \ = \ \left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}' \mathbf{u} \tag{C.4}
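Equation C.4 holds exactly for every realization of the data, not only in expectation; continuing the sketch above, this can be checked numerically:

```python
# Sampling error, Equation C.4: beta_hat - beta = (X'X)^{-1} X'u is an exact identity
sampling_error = np.linalg.solve(X.T @ X, X.T @ u)
print(np.allclose(beta_hat - beta, sampling_error))   # True up to floating-point error
```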

The estimator of the error variance \sigma^2 is (with k slope coefficients plus a constant, hence n-k-1 degrees of freedom):

\hat \sigma^2 \ = \ \left[ \frac{1}{n-k-1} \right] \mathbf {\hat u}' \mathbf {\hat u} \tag{C.5}
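Continuing the sketch, the residuals and the variance estimator of Equation C.5 are:

```python
# Residuals and error variance estimator, Equation C.5
u_hat = y - X @ beta_hat                         # OLS residuals
sigma2_hat = (u_hat @ u_hat) / (n - X.shape[1])  # degrees of freedom n - k - 1 (k slopes plus a constant)
print(sigma2_hat)
```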


C.2 Covariance matrix in general

The definition of the covariance matrix of a random vector \, \mathbf u is:

\operatorname {Var} (\mathbf u) \ := \ E\left[ \left( \mathbf{u}-E(\mathbf{u} ) \right) \left( \mathbf{u}-E(\mathbf{u} ) \right)^{\prime} \right] \tag{C.6}

Theorem C.1 (Covariance matrix of a linear combination) Assume the covariance matrix of the random vector \mathbf u \, is \,\operatorname {Var} (\mathbf u) = \mathbf \Sigma.

Then the covariance matrix of the linear combination \mathbf v = \mathbf A \mathbf u + \mathbf b, where the matrix \mathbf A and the vector \mathbf b are non-stochastic, is:

\operatorname {Var} (\mathbf v) = \mathbf A \mathbf \Sigma \mathbf A' \tag{C.7}
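Theorem C.1 can be illustrated by simulation (a self-contained Python/NumPy sketch; the matrices \mathbf \Sigma, \mathbf A and the vector \mathbf b below are arbitrary illustrative choices): the sample covariance of \mathbf v approaches \mathbf A \mathbf \Sigma \mathbf A' as the number of draws grows, and the shift \mathbf b drops out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices for Sigma, A and b
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
b = np.array([4.0, -1.0])

# Draw many u with covariance Sigma and form v = A u + b
u = rng.multivariate_normal(mean=np.zeros(2), cov=Sigma, size=100_000)
v = u @ A.T + b

print(np.cov(v, rowvar=False))   # sample covariance of v
print(A @ Sigma @ A.T)           # Equation C.7; the two matrices should be close
```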


C.2.1 OLS covariance matrix

  • Assuming a general covariance matrix of \mathbf u, i.e., \operatorname {Var}(\mathbf u) = \sigma^2 \mathbf \Omega

    • then the covariance matrix of \boldsymbol {\hat \beta} is:

\operatorname {Var}(\boldsymbol {\hat \beta}) = \sigma^2 (\mathbf X' \mathbf X)^{-1} \mathbf X' \, \mathbf \Omega\mathbf X \,(\mathbf X' \mathbf X)^{-1} \tag{C.8}

  • Assuming an i.i.d. error term, i.e., the covariance matrix of \mathbf u is \sigma^2 \mathbf I

    • then the covariance matrix of \boldsymbol {\hat \beta} simplifies to (both cases are computed in the sketch after this list):

\operatorname {Var}(\boldsymbol {\hat \beta}) = \sigma^2 (\mathbf X' \mathbf X)^{-1} \tag{C.9}
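Both cases can be estimated from the simulated regression above. The sketch below uses \hat \sigma^2 from Equation C.5 for the i.i.d. case; for an unknown \mathbf \Omega it uses, as one common plug-in choice (an assumption made here, not prescribed by the text), the heteroskedasticity-robust White/HC0 estimator, which replaces \sigma^2 \mathbf \Omega by the diagonal matrix of squared residuals:

```python
# Covariance matrix of beta_hat, continuing the sketch above
XtX_inv = np.linalg.inv(X.T @ X)

# Equation C.9: i.i.d. errors, Var(u) = sigma^2 I
var_beta_iid = sigma2_hat * XtX_inv

# Equation C.8 with the White/HC0 plug-in for sigma^2 * Omega (illustrative choice)
middle = X.T @ (X * u_hat[:, None] ** 2)       # X' diag(u_hat_i^2) X
var_beta_robust = XtX_inv @ middle @ XtX_inv

print(np.sqrt(np.diag(var_beta_iid)))          # classical standard errors
print(np.sqrt(np.diag(var_beta_robust)))       # robust (HC0) standard errors
```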


C.3 Projection matrices

The orthogonal projection matrix, or residual maker matrix, is:

\mathbf M_{\mathbf X} := \ ( \mathbf I - \underbrace{\mathbf X(\mathbf X' \mathbf X)^{-1}\mathbf X'}_{:= \ \mathbf P_{\mathbf X}}) = (\mathbf I - \mathbf P_{\mathbf X}) \tag{C.10}

Premultiplying an appropriately dimensioned arbitrary vector \mathbf z by \mathbf M_{\mathbf X} yields the vector of residuals \mathbf{\hat u} from a regression of \, \mathbf z on \mathbf X.

\mathbf P_{\mathbf X} := \ \mathbf X(\mathbf X' \mathbf X)^{-1}\mathbf X' \tag{C.11}

This is the projection matrix, also called the hat matrix.

Premultiplying an appropriately dimensioned arbitrary vector \mathbf z by \mathbf P_{\mathbf X} yields the vector of predicted values \mathbf{\hat z} from a regression of \, \mathbf z on \mathbf X.

Both \mathbf P_{\mathbf X} and \mathbf M_{\mathbf X} are symmetric and idempotent, and they are orthogonal to each other: \mathbf P_{\mathbf X} \mathbf M_{\mathbf X} = \mathbf 0.
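These properties can be verified numerically for the simulated regression above (continuing the earlier sketch):

```python
# Projection matrix and residual maker, Equations C.10 and C.11
P = X @ np.linalg.solve(X.T @ X, X.T)                # P_X
M = np.eye(n) - P                                    # M_X = I - P_X

print(np.allclose(P @ y, X @ beta_hat))              # P_X y are the fitted values
print(np.allclose(M @ y, u_hat))                     # M_X y are the residuals
print(np.allclose(P, P.T), np.allclose(P @ P, P))    # P_X is symmetric and idempotent
print(np.allclose(M, M.T), np.allclose(M @ M, M))    # M_X is symmetric and idempotent
print(np.allclose(P @ M, 0))                         # P_X and M_X are orthogonal to each other
```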