R Analysis
In this section we shall demonstrate how to do some basic data analysis on data in a dataframe. Here is an online demonstration (http://youtu.be/szq0eZCpsgU?hd=1) of some of the material covered on this page.
Basic Data Analysis
The easiest way to obtain basic summary statistics for the variables contained in a dataframe is the following command:
summary(mydata)
You will find that this will provide a range of summary statistics for each variable (Minimum and Maximum, Quartiles, Mean and Median). If the dataframe contains a lot of variables, as does the dataframe based on mroz.xls, this output can be somewhat lengthy. Say you are only interested in the summary statistics for two of the variables, hours and husage; then you would want to select these two variables only. The way to do that is the following:
summary(mydata[c("hours","husage")])
This will produce the following output:
     hours            husage     
 Min.   :   0.0   Min.   :30.00  
 1st Qu.:   0.0   1st Qu.:38.00  
 Median : 288.0   Median :46.00  
 Mean   : 740.6   Mean   :45.12  
 3rd Qu.:1516.0   3rd Qu.:52.00  
 Max.   :4950.0   Max.   :60.00  
Another extremely useful statistic is the correlation between different variables. This is achieved with the cor() function. Let's say we want the correlations between educ, motheduc and fatheduc; then we use, in the same manner:
cor(mydata[c("educ","motheduc","fatheduc")])
resulting in the following correlation matrix:

              educ  motheduc  fatheduc
educ     1.0000000 0.4353365 0.4424582
motheduc 0.4353365 1.0000000 0.5730717
fatheduc 0.4424582 0.5730717 1.0000000
Selecting variables
In what we did above we selected a small number of variables from a larger dataset (saved in a dataframe). The way we did that was to call the dataframe and then, in square brackets, indicate which variables we wanted to select. To understand what this does, go to your console and call
test1 = mydata[c("hours")]
which will create a new dataframe which includes only the one variable hours. This is very useful, as some functions need to be applied to a dataframe (see for example the "empirical" function in R_Packages).
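If you want to confirm what kind of object you have just created, you can check its class (a quick sketch, assuming you have run the line above):

class(test1)     # should return "data.frame"
summary(test1)   # summary statistics for the single variable hours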
There is another way to select the hours variable from the dataframe. Try:
test2 = mydata$hours
This will also select the hours variable. But if you check your environment tab you will see that the data have now been saved in a different type of R object, a vector (rather than a dataframe). Some functions will require such an object as input (see for example the "sd" function below).
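Again you can check what kind of object this is (a small sketch; the exact class depends on how hours is stored, typically "numeric" or "integer"):

class(test2)    # a plain vector type such as "numeric", not "data.frame"
length(test2)   # 753, one entry per observation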
Dealing with missing observations
So far all is hunky dory. Let's show some difficulties/issues. Consider that we want to calculate the correlation between educ and wage:
cor(mydata[c("educ","wage")])
The output we get is:
     educ wage
educ    1   NA
wage   NA    1
The reason for R's inability to calculate a correlation between these two variables can be seen here:
> summary(mydata[c("hours","wage")])
     hours             wage        
 Min.   :   0.0   Min.   : 0.1282  
 1st Qu.:   0.0   1st Qu.: 2.2626  
 Median : 288.0   Median : 3.4819  
 Mean   : 740.6   Mean   : 4.1777  
 3rd Qu.:1516.0   3rd Qu.: 4.9707  
 Max.   :4950.0   Max.   :25.0000  
                  NA's   :325      
The important information is that the variable wage has 325 missing observations (NA). It is not immediately obvious how to tackle this issue. We need to consult either Dr. Google or the R help function. The latter is done by typing ?cor. The help will pop up in the "Help" tab on the right hand side. You will need to read through it to find a solution to the issue. Frankly, the clever people who write the R software are not always the most skilful in writing clearly, and it is often most useful to go to the bottom of the help where you can usually find some examples. If you do that you will find that the solution to our problem is the following:
> cor(mydata[c("educ","wage")],use = "complete")
          educ      wage
educ 1.0000000 0.3419544
wage 0.3419544 1.0000000
It is perhaps worth adding a word of explanation here. cor() is what is called a function. It needs some inputs to work. The first input is the data for which to calculate correlations, mydata[c("educ","wage")]. Most functions also have what are called parameters. These are like little dials and levers which change how the function works. One of these levers can be used to tell the function to only use observations that are complete, i.e. do not have missing observations: use = "complete". Read the help function to see what other levers are at your disposal.
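For example, two of the settings documented in ?cor are shown in the sketch below (the shortened "complete" used above is an abbreviation of "complete.obs"):

cor(mydata[c("educ","wage")], use = "complete.obs")            # drop any row with a missing value
cor(mydata[c("educ","wage")], use = "pairwise.complete.obs")   # use all available pairs for each correlation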
Using Subsets of Data
Often you will want to perform some analysis on a subset of data. The way to do this in R is to use the subset function, together with a logical (boolean) statement. I will first write down the statement and then explain what it does:
mydata.sub1 <- subset(mydata, hours > 0)
On the left hand side of <- we have a new object named mydata.sub1. On the right hand side of <- we can see how that new object is defined. We are using the function subset(), which has been designed to select observations and/or columns from a dataframe such as mydata. This function needs at least two inputs. The first input is the dataframe from which we are selecting observations and variables. Here we are selecting from mydata. The second input indicates which observations/rows we want to select: hours > 0 tells R to select all those observations for which the variable hours is larger than 0.
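One quick way to see what the subset contains is to compare the number of observations in the new and the old dataframe (a small sketch):

nrow(mydata)        # 753 observations in the full dataframe
nrow(mydata.sub1)   # only those observations with hours > 0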
Often (if not always) you will not remember how exactly a function works. The internet is then usually a good source, but in your console you could also type ?subset, which will open the help page. There you can see that you could add a third input to the subset function which indicates which variables you want to include (e.g. select = c(hours, wage), which would only select these two variables). By not using this third input we indicate to R that it should select all variables in mydata.
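For illustration, a call using all three inputs could look as follows (a sketch only; mydata.sub2 is just an example name):

mydata.sub2 <- subset(mydata, hours > 0, select = c(hours, wage))   # rows with positive hours, and only these two variables
summary(mydata.sub2)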
Logical/Boolean Statements
The way in which we selected the observations, i.e. by using the logical statement hours > 0, is worth dwelling on for a moment. This type of logical statement creates variables in R that are given the logical data type. Sometimes these are also called boolean variables.
To see what is special about these, go to your console and just type something like 5>9 and then press ENTER. You will realise that R is a clever little thing and will tell you that in fact 5 is not larger than 9 by returning the answer FALSE. When we provide R with hours > 0, the software checks, for all our 753 observations, whether the value of the hours variable is larger than 0 or not. It will create a variable (vector) with 753 entries, and each entry will be either TRUE or FALSE, depending on whether the respective value is larger than 0 or not.
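If you want to see this elementwise behaviour in isolation, you can try it on a small made-up vector (a sketch, not part of the mroz data):

x <- c(0, 150, 0, 2000)   # a made-up example vector
x > 0                     # returns FALSE  TRUE FALSE  TRUE, one entry per element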
You can create logical variables on the basis of more complicated logical statements as well. You can combine statements by noting that & represents AND, and | represents OR. You will want to use one of the following relational operators: == checks whether two things are equal; != will check if two things are unequal; > and < take their well-known roles. To figure out how these work, try the following statements in your console and see whether you can guess the right answers:
(3 > 2) & (3 > 1)
(3 > 2) & (3 > 6)
(3 > 5) & (3 > 6)
(3 > 2) | (3 > 1)
(3 > 2) | (3 > 6)
(3 > 5) | (3 > 6)
((3 == 5) & (3 > 2)) | (3 > 1)
Being comfortable with these logical statements will make the life of every programmer much easier.
Some basic summary statistics
Especially when you are dealing with categorical data it is often useful to look at contingency tables, i.e. tables with counts of all possible values. The function that achieves this is the table function. Try:
> table(mydata$kidslt6)
and you will get:
  0   1   2   3 
606 118  26   3 
which tells you that there were 118 women with one child younger than 6. It turns out that you could also use the table function with the alternative way of selecting a variable, i.e. mydata[c("kidslt6")]. There is, unfortunately, no easy way of knowing which function works with which way of selecting variables from a dataframe. You will just have to try or read the relevant help function.
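For instance, the following call uses the square-bracket way of selecting the variable and should give the same counts as above:

table(mydata[c("kidslt6")])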
You can also produce a cross tabulation by adding a second variable:
> table(mydata$kidslt6, mydata$kidsge6)
which produces:
      0   1   2   3   4   5   6   7   8
  0 229 144 121  75  26   9   0   1   1
  1  17  35  36  24   3   3   0   0   0
  2  11   5   5   3   1   0   1   0   0
  3   1   1   0   1   0   0   0   0   0
There are 24 women that have three children at least 6 years old and one younger child.
There are a number of basic summary statistics that are part of every basic data toolbox: means, medians and standard deviations for a set of data. Let's take a particular variable, the wage variable. Try the following command:
mean(mydata$wage)
You could replace mean with median, sd, var, min or max (which all represent obvious sample summary statistics); the result is always that you will find an unpleasant NA. Why is this? If you look again at the wage data you will see that there are missing data in here (as we already discovered above). Now check the details of any of these functions by, say, typing ?mean into the console. Reading through the help you will find that you need to add the parameter na.rm=TRUE to your function call. So:
mean(mydata$wage, na.rm=TRUE)
will deliver the sample mean of 4.177682. This additional parameter essentially instructs the function mean to remove all NAs.
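The same lever works for the other summary functions mentioned above, for example:

sd(mydata$wage, na.rm=TRUE)       # standard deviation, ignoring the missing observations
median(mydata$wage, na.rm=TRUE)   # median, ignoring the missing observations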
While you could produce all sorts of summary statistics individually as just indicated, you could also obtain them all in one go by using
summary(mydata$wage, na.rm=TRUE)
which will return the mean, median, max, min, and quartiles (but annoyingly not the standard deviation).
Re-classifying categorical/factor variables
When you have categorical data you may often want to re-classify your categories into new, usually broader categories. In the current dataset this isn't really an issue, but let's say we did have an ethnicity variable in our dataframe; for argument's sake, assume that these data are in mydata$Ethnicity and are encoded as a factor variable.
The reason for re-classifying (or re-coding) is that sometimes we will have categories that are too small. To find your frequencies you can use the table(mydata$Ethnicity) or summary(mydata$Ethnicity) command. If you do that you may find something like:
      Asian       Black Mixed Asian Mixed Black Mixed White       White 
        120         254           2          15          12         350 
Let's say you want to amalgamate the Mixed categories into one big "Mixed" category. Here is the easiest way to do this. We create a new variable in our dataframe
mydata$Eth_cat <- as.character(0) # new variable is called Eth_cat, initially as character variable
Now we need to define the values this new variable should take:
mydata$Eth_cat[mydata$Ethnicity == "Asian"] <- "Asian"
mydata$Eth_cat[mydata$Ethnicity == "Black"] <- "Black"
mydata$Eth_cat[mydata$Ethnicity == "Mixed Asian"] <- "Mixed"
mydata$Eth_cat[mydata$Ethnicity == "Mixed Black"] <- "Mixed"
mydata$Eth_cat[mydata$Ethnicity == "Mixed White"] <- "Mixed"
mydata$Eth_cat[mydata$Ethnicity == "White"] <- "White"
In each line we are selecting all rows in the dataframe for which the Ethnicity variable takes a certain value, e.g. mydata$Eth_cat[mydata$Ethnicity == "Asian"] selects all rows with Asian respondents. Then we assign "Asian" to these rows with <-. We do this for all possible categories in Ethnicity. What we have created at this stage is a new variable with all the desired categories. It is, however, at this stage a text-based variable, and it may be of advantage to transform it into a factor (categorical) variable. This is very straightforward:
mydata$Eth_cat <- as.factor(mydata$Eth_cat)
and you are good to go!
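If you want to reassure yourself that the re-coding has worked, you can tabulate the new variable (a quick check; the count for "Mixed" should equal the sum of the three original mixed categories):

table(mydata$Eth_cat)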