R robust se
Here we briefly discuss how to estimate robust standard errors for linear regression models.
Which package to use
There are a number of pieces of code available to facilitate this task[1]. Here I recommend using the "sandwich" package, which has the most comprehensive robust standard error options I am aware of.
As described in more detail in R_Packages you should install the package the first time you use it on a particular computer:
install.packages("sandwich")
and then load the package at the beginning of your script:
library(sandwich)
All code snippets below assume that you have done so. In fact, you may instead want to use another package called "AER", which contains the sandwich package; see the relevant CRAN webpage.
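If you go that route, the setup is analogous (a minimal sketch; installing "AER" pulls in "sandwich" and "lmtest" as dependencies):
install.packages("AER")
library(AER)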
Heteroskedasticity robust standard errors
I assume that you know that the presence of heteroskedastic error terms renders the OLS estimator of linear regression models inefficient (although it remains unbiased). More seriously, however, heteroskedasticity also implies that the usual standard errors computed for your coefficient estimates (e.g. when you use the summary() command as discussed in R_Regression) are incorrect (or sometimes we call them biased). This implies that inference based on these standard errors will be incorrect (incorrectly sized). What we need are coefficient estimate standard errors that are correct even when the regression error terms are heteroskedastic, sometimes called White standard errors.
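The examples below use a dataset called mydata with the variables lwage, exper and huswage (as in R_Regression). If you do not have that dataset at hand, a minimal simulated stand-in with heteroskedastic errors (all parameter values here are hypothetical, chosen only for illustration) lets you run the snippets:
# Simulated stand-in for the wage data (hypothetical values)
set.seed(123)
n <- 500
exper <- runif(n, 0, 40)                      # years of experience
huswage <- exp(rnorm(n, mean = 2, sd = 0.5))  # husband's wage (positive)
u <- rnorm(n, sd = 0.2 + 0.05 * exper)        # error variance grows with exper
lwage <- 0.5 + 0.03 * exper + 0.2 * log(huswage) + u
mydata <- data.frame(lwage, exper, huswage)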
Let's assume that you have calculated a regression (as in R_Regression):
# Run a regression
reg_ex1 <- lm(lwage~exper+log(huswage),data=mydata)
The function from the "sandwich" package that you want to use is called vcovHC() and you use it as follows:
vcv <- vcovHC(reg_ex1, type = "HC1")
This saves the heteroskedasticity-robust variance-covariance matrix of the coefficient estimates in vcv[2]. Now you can calculate robust t-tests by using the estimated coefficients and the new standard errors (square roots of the diagonal elements of vcv). But note that inference using these standard errors is only valid for sufficiently large sample sizes (asymptotically normally distributed t-tests).
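For instance, a quick sketch of that calculation by hand:
# Robust standard errors: square roots of the diagonal of vcv
robust_se <- sqrt(diag(vcv))
# Robust t-statistics: coefficient estimates divided by robust standard errors
robust_t <- coef(reg_ex1) / robust_se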
You may actually want a neater way to see the standard errors, rather than having to calculate the square roots of the diagonal of this matrix yourself. This is done with the coeftest() function (from the "lmtest" package, which is loaded automatically if you use "AER"):
coeftest(reg_ex1, vcv)
if you already calculated vcv. Try it out and you will find the regression coefficients along with their new standard errors, t-statistics and p-values. If not, you can use the following line
coeftest(reg_ex1, vcov = vcovHC(reg_ex1,type="HC1"))
which incorporates the call to the vcovHC() function.
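Recent versions of the "lmtest" package also provide coefci(), which computes confidence intervals from a supplied variance-covariance matrix; assuming your version has it, for example:
coefci(reg_ex1, vcov. = vcv)    # robust 95% confidence intervals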
Autocorrelation and heteroskedasticity robust standard errors
When the error terms are autocorrelated (and potentially heteroskedastic), all of the above applies, except that we need yet another estimator for the coefficient estimate standard errors, sometimes called the Newey-West estimator.
The function from the "sandwich" package that you want to use is called vcovHAC() and you use it as follows:
vcv <- vcovHAC(reg_ex1)
Everything else works as it does for heteroskedastic error terms.
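For example, displaying the regression coefficients with their Newey-West standard errors works exactly as before:
coeftest(reg_ex1, vcov = vcovHAC(reg_ex1))
The "sandwich" package also provides the NeweyWest() function if you want the textbook Newey-West bandwidth selection.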