Joint Probability Distributions


The objective of statistics is to learn about population characteristics: this was first mentioned in the Introductory Section. An EXPERIMENT is any process which generates data. It is easy to imagine circumstances in which an experiment (say, interviewers surveying couples in the street) generates pairs of observations, for example the weekly income of the husband and the weekly income of the wife in a particular population of couples. One possible use of such sample data is to investigate the relationship between the observed values of the two variables. In the Section Correlation and Regression we discussed how to use correlation and regression to summarise the extent and nature of any linear relationship between these observed values. That discussion only scratches the surface of a very large topic; subsequent courses in Econometrics take the analysis of relationships between two or more variables much further.

If these two pieces of information generated by the experiment are considered to be the values of two random variables defined on the SAMPLE SPACE of an experiment (see this [[ProbabilityIntro | Section]]), then the discussion of random variables and probability distributions needs to be extended to the multivariate case.

Let [math]X[/math] and [math]Y[/math] be the two random variables: for simplicity, they are considered to be discrete random variables. The outcome of the experiment is a pair of values [math]\left( x,y\right)[/math]. The probability of this outcome is a joint probability which can be denoted

[math]\Pr \left( X=x\cap Y=y\right) ,[/math]

emphasising the analogy with the probability of a joint event [math]\Pr \left( A\cap B\right)[/math], or, more usually, by

[math]\Pr \left( X=x,Y=y\right) .[/math]

  • The collection of these probabilities, for all possible combinations of [math]x[/math] and [math]y[/math], is the joint probability distribution of [math]X[/math] and [math]Y[/math], denoted

    [math]p\left( x,y\right) =\Pr \left( X=x,Y=y\right) .[/math]

  • The Axioms of Probability discussed in this Section carry over to imply

    [math]0\leqslant p\left( x,y\right) \leqslant 1,[/math]

    [math]\sum_{x}\sum_{y}p\left( x,y\right) =1,[/math]

    where the sum is over all [math]\left( x,y\right)[/math] combinations, as illustrated in the sketch below.
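To make the definition concrete, here is a minimal Python sketch that stores a joint probability distribution of two discrete random variables and checks the two conditions above. The particular probabilities, and the name joint_pmf, are illustrative assumptions, not part of the material above.

<syntaxhighlight lang="python">
# A minimal sketch: the joint pmf p(x, y) is stored as a dictionary
# mapping (x, y) pairs to probabilities. The numbers are illustrative.
joint_pmf = {
    (0, 0): 0.2, (0, 1): 0.3,
    (1, 0): 0.1, (1, 1): 0.4,
}

# First condition: every probability lies between 0 and 1.
assert all(0 <= p <= 1 for p in joint_pmf.values())

# Second condition: summing over all (x, y) combinations gives 1.
assert abs(sum(joint_pmf.values()) - 1.0) < 1e-12

# Reading off a single joint probability, Pr(X = 1, Y = 0):
print(joint_pmf[(1, 0)])  # 0.1
</syntaxhighlight>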

Examples

Example 1

Let [math]H[/math] and [math]W[/math] be the random variables representing the weekly incomes of husbands and wives, respectively, in some country. There are only three possible weekly incomes: £0, £100 or £200. In the table below incomes are measured in units of £100, so the possible values of [math]H[/math] and [math]W[/math] are [math]0[/math], [math]1[/math] and [math]2[/math]. The joint probability distribution of [math]H[/math] and [math]W[/math] is represented as a table:

[math]\begin{tabular}{|ll|lll|} \hline & & \multicolumn{3}{|l|}{Values of $H$:} \\ Probabilities & & $0$ & $1$ & $2$ \\ \hline Values of $W$: & $0$ & $0.05$ & $0.15$ & $0.10$ \\ & $1$ & $0.10$ & $0.10$ & $0.30$ \\ & $2$ & $0.05$ & $0.05$ & $0.10$ \\ \hline \end{tabular}[/math]

Then we can read off, for example, that

[math]\Pr \left( H=0,W=0\right) =0.05,[/math]

so that in this population, both the husband and the wife have a zero weekly income in [math]5\%[/math] of couples.
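For readers who want to experiment, the table can be reproduced in code. The sketch below is a hedged illustration: the nested list mirrors the table above (incomes in units of £100), and the helper name pr is hypothetical, introduced only for this example.

<syntaxhighlight lang="python">
# A sketch of Example 1: rows are indexed by the value of W and
# columns by the value of H, exactly as in the table above.
table = [
    [0.05, 0.15, 0.10],  # W = 0
    [0.10, 0.10, 0.30],  # W = 1
    [0.05, 0.05, 0.10],  # W = 2
]

def pr(h, w):
    """Return Pr(H = h, W = w), read off the table."""
    return table[w][h]

print(pr(0, 0))  # 0.05: both incomes are zero in 5% of couples

# The nine entries sum to 1, as the axioms require.
assert abs(sum(sum(row) for row in table) - 1.0) < 1e-12
</syntaxhighlight>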
