Let's assume we want to run a regression with lwage (the logarithm of the woman's wage) as the dependent variable and a constant, exper (the years of experience) and the logarithm of the husband's wage (huswage) as explanatory variables. First we should note that the logarithm of the woman's wage already exists as the variable lwage, but the logarithm of the husband's wage does not exist as a variable of its own. Hence we still have to calculate it.
The lm() function
The R function that does the heavy lifting for regression analysis is the lm() function (presumably an abbreviation for "linear model") and we will have a close look at how it works. But first, let's get our first regression under our belt.
A first example
The following few lines of code (which you should save in a script) import the data, convert missing values to NAs (see R_Data#Data_Types) and finally run a regression:
# This is my first R regression!
setwd("T:/ECLR/R/FirstSteps")   # This sets the working directory
mydata <- read.csv("mroz.csv")  # Opens mroz.csv from working directory

# Now convert variables with "." to num with NA
mydata$wage <- as.numeric(as.character(mydata$wage))
mydata$lwage <- as.numeric(as.character(mydata$lwage))

# Run a regression
reg_ex1 <- lm(lwage~exper+log(huswage),data=mydata)
So let's look at the last line, in which we ask R to run a regression. Whatever comes in the parentheses after lm are the parameters handed to the lm() function, and different parameters are separated by commas. Here we have two inputs. Let's start with the second, data=mydata. This indicates to R that the data for the regression are drawn from our dataframe called mydata. That means that in the first input, in which we actually specify the model we want to estimate, we can refer to the names of the variables contained in mydata. In that first input you should imagine writing down a regression model. The model we want to estimate is the following:
[math]lwage = \beta_0 + \beta_1 \, exper + \beta_2 \, \log(huswage) + \epsilon[/math]
The way you tell lm() to estimate this model is to leave out the coefficients and the error term and to replace the equals sign with a ~. This is where the lwage~exper+log(huswage) part of

reg_ex1 <- lm(lwage~exper+log(huswage),data=mydata)

comes from. One additional note here: the log(huswage) part of the model takes the huswage variable from our dataframe and applies the log() function to it.
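If you prefer, you could equally create the logged variable first and then use it in the formula. A minimal sketch, assuming huswage was imported as a numeric variable (the names lhuswage and reg_ex1b are just illustrative choices):

# Pre-compute the logged husband's wage, then use it in the formula
mydata$lhuswage <- log(mydata$huswage)            # assumes huswage is numeric
reg_ex1b <- lm(lwage~exper+lhuswage,data=mydata)  # same estimates as reg_ex1

Both versions estimate the same model; applying log() inside the formula merely saves you the extra step.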
You should be familiar with the left-hand side of the command, which assigns the results of the regression to a new object called reg_ex1. You should think of this as a sort of folder in which R has now saved all the regression results. If you look at the object reg_ex1 in your environment, you will see that it is a list object with 13 elements. So far so good, but if you click on the little triangle next to its name to see the detail, you will most likely scratch your head and think "What a mess!", and I think you are quite right.
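To get a first overview of what is stored in that folder you can list the names of its elements; here is a small sketch (the exact set of names can vary slightly depending on how the model was estimated):

names(reg_ex1)    # names of the elements stored in the lm object
# e.g. "coefficients", "residuals", "fitted.values", "df.residual", "call", ...

Several of these, such as coefficients and residuals, are exactly the quantities you will want to work with later.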
Regression Output
You will be familiar with a standard regression output containing estimated coefficients, standard errors and a set of regression statistics[1]. Fortunately it is very easy to obtain this. The command is
> summary(reg_ex1)

Call:
lm(formula = lwage ~ exper + log(huswage), data = mydata)

Residuals:
     Min       1Q   Median       3Q      Max
-3.10089 -0.31219  0.02919  0.37466  2.11402

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.534866   0.139082   3.846 0.000139 ***
exper        0.016684   0.004243   3.933 9.81e-05 ***
log(huswage) 0.236466   0.063684   3.713 0.000232 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7031 on 425 degrees of freedom
  (325 observations deleted due to missingness)
Multiple R-squared: 0.05919,	Adjusted R-squared: 0.05477
F-statistic: 13.37 on 2 and 425 DF,  p-value: 2.338e-06
Most things should look familiar here. One thing you should note is that, on this occasion, R automatically ignored all the missing observations, all 325 of them, leaving R to estimate the model on 428 observations.[2]
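You can check this bookkeeping directly in R. A small sketch (nrow() and nobs() are standard base R / stats functions; the numbers quoted assume the mroz.csv data used above):

nrow(mydata)                   # rows in the dataframe, here 753
nobs(reg_ex1)                  # observations actually used, here 428
nrow(mydata) - nobs(reg_ex1)   # the 325 observations dropped due to missingness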
Accessing Regression Output
The next thing you, as a budding applied econometrician, will want to know is how you actually access these regression statistics in order to use them in some further analysis. Let's see how you would access the regression residuals.
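Here is a minimal sketch of one way to do this (the object name res_ex1 is just an illustrative choice):

res_ex1 <- reg_ex1$residuals   # pull the residuals out of the results object
res_ex1 <- residuals(reg_ex1)  # equivalent, using the extractor function
length(res_ex1)                # 428, one residual for each observation actually used

Either form gives you a numeric vector that you can summarise, plot or use in further calculations.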
Footnotes
- ↑ This wiki is not meant to explain the econometrics, but merely the programming implementation.
- ↑ Why does the output not show this number? Yes, a fair question. If you realise that your dataframe has 753 observations, then 753 - 325 (missing) observations is 428. Alternatively you can infer the number from the reported degrees of freedom, which are calculated as the number of observations minus the number of estimated coefficients, here 3.