<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://eclr.humanities.manchester.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rb</id>
		<title>ECLR - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://eclr.humanities.manchester.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rb"/>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php/Special:Contributions/Rb"/>
		<updated>2026-04-25T02:40:30Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.1</generator>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Probability_Intro&amp;diff=4280</id>
		<title>Probability Intro</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Probability_Intro&amp;diff=4280"/>
				<updated>2022-06-02T09:59:18Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
= Introducing Probability =&lt;br /&gt;
&lt;br /&gt;
So far we have been looking at ways of summarising samples of data drawn from an underlying population of interest. Although at times tedious, all such arithmetic calculations are fairly mechanical and straightforward to apply. To remind ourselves, one of the primary reasons for wishing to summarise data is to assist in the development of inferences about the population from which the data were taken. That is to say, we would like to elicit some information about the mechanism which generated the observed data.&lt;br /&gt;
&lt;br /&gt;
We now start on the process of developing mathematical ways of formulating inferences and this requires the use of &amp;#039;&amp;#039;probability&amp;#039;&amp;#039;. This becomes clear if we think back to one of the early questions posed in this course: &amp;#039;&amp;#039;prior to sampling is it possible to predict with absolute certainty what will be observed&amp;#039;&amp;#039;? The answer to this question is &amp;#039;&amp;#039;no&amp;#039;&amp;#039;; although it would be of interest to know how &amp;#039;&amp;#039;likely&amp;#039;&amp;#039; it is that certain values would be observed. Or, what is the &amp;#039;&amp;#039;probability&amp;#039;&amp;#039; of observing certain values?&lt;br /&gt;
&lt;br /&gt;
Before proceeding, we need some more tools:&lt;br /&gt;
&lt;br /&gt;
= Venn diagrams =&lt;br /&gt;
&lt;br /&gt;
Venn diagrams (and diagrams in general) are of enormous help in trying to understand and manipulate probability. We begin with some basic definitions, some of which we have encountered before.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Experiment:&amp;#039;&amp;#039;&amp;#039; any process which, when applied, provides data or an outcome; e.g., rolling a die and observing the number of dots on the upturned face; recording the amount of rainfall in Manchester over a period of time.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Sample Space:&amp;#039;&amp;#039;&amp;#039; set of possible outcomes of an experiment; e.g., &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;\Omega &amp;lt;/math&amp;gt;) &amp;lt;math&amp;gt;=&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;, which is the sample space of rolling a die. Or &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;=&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\{x;x\geq 0\}&amp;lt;/math&amp;gt;, which is the sample space of an experiment where the outcomes can be any real non-negative number, or ‘&amp;#039;&amp;#039;the set of non-negative real numbers&amp;#039;&amp;#039;’.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Simple Event&amp;#039;&amp;#039;&amp;#039;: just one of the possible outcomes on &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Event:&amp;#039;&amp;#039;&amp;#039; a &amp;#039;&amp;#039;subset&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;E\subset S&amp;lt;/math&amp;gt;; e.g., &amp;lt;math&amp;gt;E=\left\{ 2,4,6\right\}&amp;lt;/math&amp;gt; (i.e. any even number on a die) or &amp;lt;math&amp;gt;E=\left\{ x;4&amp;lt;x\leq 10\right\} ,&amp;lt;/math&amp;gt; which means ‘&amp;#039;&amp;#039;the set of real numbers which are strictly bigger than&amp;#039;&amp;#039; &amp;lt;math&amp;gt;4&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;but less than or equal to &amp;#039;&amp;#039;&amp;lt;math&amp;gt;10&amp;lt;/math&amp;gt;’.&amp;lt;br /&amp;gt;Note that an event, &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;, is a collection of simple events.&lt;br /&gt;
&lt;br /&gt;
Such concepts can be represented by means of the following Venn Diagram:&lt;br /&gt;
&lt;br /&gt;
[[File:Venn_1.jpg|frameless|600px]]&lt;br /&gt;
&lt;br /&gt;
The sample space, &amp;lt;math&amp;gt;S,&amp;lt;/math&amp;gt; is depicted as a closed rectangle; the event &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; is a closed loop wholly contained within &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, and we write (in set notation) &amp;lt;math&amp;gt;E\subset S&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In dealing with probability, and in particular the probability of an event (or events) occurring, we shall need to be familiar with &amp;#039;&amp;#039;&amp;#039;UNIONS, INTERSECTIONS&amp;#039;&amp;#039;&amp;#039; and &amp;#039;&amp;#039;&amp;#039;COMPLEMENTS&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
To illustrate these concepts, consider the sample space &amp;lt;math&amp;gt;S=\{x;x\geq 0\},\,&amp;lt;/math&amp;gt; with the following events defined on &amp;lt;math&amp;gt;S,&amp;lt;/math&amp;gt; as depicted in Figure 3.2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;E=\{x;4&amp;lt;x\leq 10\},\,F=\{x;7&amp;lt;x\leq 17\},\,G=\{x;x&amp;gt;15\},\,H=\{x;9&amp;lt;x\leq 13\}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| (a) Event &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;: A subset&lt;br /&gt;
| (b) Union: &amp;lt;math&amp;gt;E\cup F&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| [[File:Venn_2a.jpg|frameless|300px]] &lt;br /&gt;
| [[File:Venn_2b.jpg|frameless|300px]]&lt;br /&gt;
|-&lt;br /&gt;
| (c) Intersection: &amp;lt;math&amp;gt;E\cap F&amp;lt;/math&amp;gt;&lt;br /&gt;
| (d) The Null set/event: &amp;lt;math&amp;gt;E\cap G=\emptyset &amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| [[File:Venn_2c.jpg|frameless|300px]] &lt;br /&gt;
| [[File:Venn_2d.jpg|frameless|300px]]&lt;br /&gt;
|-&lt;br /&gt;
| (e) Complement of &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\bar{E}&amp;lt;/math&amp;gt;&lt;br /&gt;
| (f) Subset of &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;H\subset F&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;H\cap F=H&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| [[File:Venn_2e.jpg|frameless|300px]] &lt;br /&gt;
| [[File:Venn_2f.jpg|frameless|300px]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
        &lt;br /&gt;
&lt;br /&gt;
* The &amp;#039;&amp;#039;union&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;E\cup F,&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;E\cup F=\{x;4&amp;lt;x\leq 17\};&amp;lt;/math&amp;gt; i.e., it contains elements (simple events) which are either in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; or in &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; or (perhaps) in both. This is illustrated on the Venn diagram by the dark shaded area in diagram (b).&lt;br /&gt;
* The &amp;#039;&amp;#039;intersection&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;E\cap F,&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;E\cap F=\left\{ x;7&amp;lt;x\leq 10\right\} ;&amp;lt;/math&amp;gt; i.e., it contains elements (simple events) which are common to both &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F.&amp;lt;/math&amp;gt; Again this is depicted by the dark shaded area in (c). If events have no elements in common (as, for example, &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;) then they are said to be &amp;#039;&amp;#039;mutually exclusive&amp;#039;&amp;#039;, and we can write &amp;lt;math&amp;gt;E\cap G=\emptyset ,&amp;lt;/math&amp;gt; meaning the &amp;#039;&amp;#039;null set&amp;#039;&amp;#039; which contains no elements. Such a situation is illustrated on the Venn Diagram by events (the two shaded closed loops in (d)) which do not overlap. Notice however that &amp;lt;math&amp;gt;G\cap F\neq \emptyset ,&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; have elements in common.&lt;br /&gt;
* The &amp;#039;&amp;#039;complement&amp;#039;&amp;#039; of an event &amp;lt;math&amp;gt;E,&amp;lt;/math&amp;gt; say, is everything defined on the sample space which is not in &amp;lt;math&amp;gt;E.&amp;lt;/math&amp;gt; This event is denoted &amp;lt;math&amp;gt;\bar{E}&amp;lt;/math&amp;gt;, the dark shaded area in (e); here &amp;lt;math&amp;gt;\bar{E}=\left\{ x;x\leq 4\right\} \cup \left\{ x;x&amp;gt;10\right\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
* Finally note that &amp;lt;math&amp;gt;H&amp;lt;/math&amp;gt; is a subset of &amp;lt;math&amp;gt;F;&amp;lt;/math&amp;gt; see (f). It is depicted as the dark closed loop wholly contained within &amp;lt;math&amp;gt;F,&amp;lt;/math&amp;gt; the lighter shaded area, so that &amp;lt;math&amp;gt;H\cap F=H;&amp;lt;/math&amp;gt; if an element in the sample space is a member of &amp;lt;math&amp;gt;H&amp;lt;/math&amp;gt; then it must also be a member of &amp;lt;math&amp;gt;F.&amp;lt;/math&amp;gt; (In mathematical logic, we employ this scenario to indicate that “&amp;lt;math&amp;gt;H&amp;lt;/math&amp;gt; implies &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;”, but not necessarily vice-versa.) Notice that &amp;lt;math&amp;gt;G\cap H=\emptyset &amp;lt;/math&amp;gt; but &amp;lt;math&amp;gt;H\cap E\neq \emptyset&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Probability =&lt;br /&gt;
&lt;br /&gt;
The term &amp;#039;&amp;#039;probability&amp;#039;&amp;#039; (or some equivalent) is used in everyday conversation and so will be familiar to the reader. We talk of the probability, or chance, of rain; the likelihood of England winning the World Cup; or, perhaps more scientifically, the chance of getting a &amp;lt;math&amp;gt;6&amp;lt;/math&amp;gt; when rolling a die. What we shall now do is develop a coherent theory of probability; a theory which allows us to combine and manipulate probabilities in a consistent and meaningful manner. We shall describe ways of dealing with, and describing, uncertainty. This will involve &amp;#039;&amp;#039;rules&amp;#039;&amp;#039; which govern our use of terms like probability.&lt;br /&gt;
&lt;br /&gt;
There have been a number of different approaches to (interpretations of) probability. Most depend, at least to some extent, on the notion of relative frequency, as now described:&lt;br /&gt;
&lt;br /&gt;
* Suppose an experiment has an outcome of interest &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;. The &amp;#039;&amp;#039;relative frequency interpretation&amp;#039;&amp;#039; of probability says that assuming the experiment can be repeated a large number of times then the relative frequency of observing the outcome &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; will settle down to a &amp;#039;&amp;#039;number&amp;#039;&amp;#039;, denoted &amp;lt;math&amp;gt;\Pr (E),&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;P(E)&amp;lt;/math&amp;gt; or Prob&amp;lt;math&amp;gt;(E),&amp;lt;/math&amp;gt; called the &amp;#039;&amp;#039;&amp;#039;probability&amp;#039;&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This is illustrated in the next Figure where the proportion of heads obtained after &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; flips of a fair coin is plotted against &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, as &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; increases; e.g., of the first 100 flips, 46 were heads (&amp;lt;math&amp;gt;46\%&amp;lt;/math&amp;gt;). Notice that the plot becomes less ‘wobbly’ after about &amp;lt;math&amp;gt;n=140&amp;lt;/math&amp;gt; and appears to be settling down to the value of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Prob_coin.jpg|frameless|600px]]&lt;br /&gt;
&lt;br /&gt;
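Such an experiment is easy to mimic; the following is a minimal MATLAB sketch (assuming a fair coin and an arbitrary 500 flips):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% simulate n flips of a fair coin and plot the running proportion of heads&lt;br /&gt;
n = 500;                        % number of flips (an arbitrary choice)&lt;br /&gt;
flips = rand(n,1) &amp;lt; 0.5;        % 1 = heads, each with probability 1/2&lt;br /&gt;
prop = cumsum(flips)./(1:n)&amp;#039;;   % proportion of heads after each flip&lt;br /&gt;
plot(1:n,prop); hold on;&lt;br /&gt;
plot([1 n],[0.5 0.5],&amp;#039;--&amp;#039;);     % the limiting value of 1/2&lt;br /&gt;
xlabel(&amp;#039;n&amp;#039;); ylabel(&amp;#039;proportion of heads&amp;#039;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;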
Due to this interpretation of probability, we often use observed sample proportions to approximate underlying probabilities of interest; see, for example, Question 4 of Exercise 2. There are, of course, other interpretations of probability; e.g., the subjective interpretation which simply expresses the strength of one’s belief about an event of interest such as whether Manchester United will win the European Cup! Any one of these interpretations can be used in practical situations provided the implied notion of probability follows a simple set of &amp;#039;&amp;#039;axioms&amp;#039;&amp;#039; or &amp;#039;&amp;#039;rules&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== The axioms of probability ==&lt;br /&gt;
&lt;br /&gt;
There are just &amp;#039;&amp;#039;three&amp;#039;&amp;#039; basic rules that must be obeyed when dealing with probabilities:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;For any event &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; defined on &amp;lt;math&amp;gt;S,&amp;lt;/math&amp;gt; i.e., &amp;lt;math&amp;gt;E\subset S,\,\,\Pr (E)\geq 0&amp;lt;/math&amp;gt;; &amp;#039;&amp;#039;probabilities are non-negative&amp;#039;&amp;#039;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr (S)=1;&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;having defined the sample space of outcomes, one of these outcomes must be observed&amp;#039;&amp;#039;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If events &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt;, defined on &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, are mutually exclusive, so that &amp;lt;math&amp;gt;E\cap F=\emptyset &amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\Pr \left( E\cup F\right) =\Pr \left( E\right)+\Pr \left( F\right) .&amp;lt;/math&amp;gt; In general, for any set of mutually exclusive events, &amp;lt;math&amp;gt;E_{1},E_{2},\ldots ,E_{k},&amp;lt;/math&amp;gt; defined on &amp;lt;math&amp;gt;S:&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr (E_{1}\cup E_{2}\cup \ldots \cup E_{k})=\Pr (E_{1})+\Pr (E_{2})+\ldots +\Pr (E_{k})&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;i.e., &amp;lt;math&amp;gt;\Pr \left( \bigcup_{j=1}^{k}E_{j}\right) =\sum_{j=1}^{k}\Pr (E_{j}).&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In terms of the Venn Diagram, one can (and should) usefully think of the area of &amp;lt;math&amp;gt;E,&amp;lt;/math&amp;gt; relative to that of &amp;lt;math&amp;gt;S,&amp;lt;/math&amp;gt; as providing an indication of probability. (Note, from axiom 2, that the area of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is implicitly normalised to be unity).&lt;br /&gt;
&lt;br /&gt;
Also observe that, contrary to what you may have believed, it is not one of the rules that &amp;lt;math&amp;gt;\Pr (E)\leq 1&amp;lt;/math&amp;gt; for any event &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;. Rather, this is an implication of the &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; rules given:&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Implications:&amp;#039;&amp;#039;&amp;#039; it must be that for any event &amp;lt;math&amp;gt;E,&amp;lt;/math&amp;gt; defined on &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;E\cap \bar{E}=\emptyset &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\cup \bar{E}=S.&amp;lt;/math&amp;gt; By Axiom &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\Pr (E)\geq 0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Pr \left( \bar{E}\right) \geq 0&amp;lt;/math&amp;gt; and by Axiom &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\Pr(E)+\Pr (\bar{E})=\Pr (S).&amp;lt;/math&amp;gt; So &amp;lt;math&amp;gt;\Pr \left( E\right) +\Pr \left( \bar{E}\right) =1,&amp;lt;/math&amp;gt; by Axiom &amp;lt;math&amp;gt;2.&amp;lt;/math&amp;gt; This implies that&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;math&amp;gt;0\leq \Pr (E)\leq 1&amp;lt;/math&amp;gt;&lt;br /&gt;
# &amp;lt;math&amp;gt;\Pr (\bar{E})=1-\Pr (E)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first of these is what we might have expected from probability (a number lying between &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;). The second implication is also very important; it says that the probability of &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; not happening is ‘&amp;#039;&amp;#039;one minus the probability of it happening&amp;#039;&amp;#039;’. Thus when rolling a die, the probability of getting &amp;lt;math&amp;gt;6&amp;lt;/math&amp;gt; is one minus the probability of getting either a &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;2&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;4&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;5.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These axioms imply how to calculate probabilities on a sample space of equally likely outcomes. For example, and as we have already noted, the experiment of rolling a fair die defines a sample space of six, mutually exclusive and equally likely outcomes (&amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;6&amp;lt;/math&amp;gt; dots on the up-turned face). The axioms then say that each of the six probabilities is positive, that they add to 1 and that they are all the same. Thus, the probability of any one of the outcomes must be simply &amp;lt;math&amp;gt;\frac{1}{6};&amp;lt;/math&amp;gt; which may accord with your intuition. A similar sort of analysis reveals that the probability of drawing a club from a deck of &amp;lt;math&amp;gt;52&amp;lt;/math&amp;gt; cards is &amp;lt;math&amp;gt;\frac{13}{52},&amp;lt;/math&amp;gt; since any one of the &amp;lt;math&amp;gt;52&amp;lt;/math&amp;gt; cards has an equal chance of being drawn and &amp;lt;math&amp;gt;13&amp;lt;/math&amp;gt; of them are clubs. Notice the importance of the assumption of equally likely outcomes here.&lt;br /&gt;
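&lt;br /&gt;
These equally likely calculations amount to simple counting; a minimal MATLAB sketch of the two examples just given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% probability = (number of favourable outcomes)/(number of possible outcomes)&lt;br /&gt;
S = 1:6;                   % sample space of a fair die&lt;br /&gt;
prSix = 1/numel(S)         % each of the six outcomes has probability 1/6&lt;br /&gt;
prClub = 13/52             % 13 clubs among 52 equally likely cards&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;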
&lt;br /&gt;
In this, and the next section of notes, we shall see how these axioms can be used. Firstly, consider the construction of a probability for the &amp;#039;&amp;#039;union&amp;#039;&amp;#039; of two events; i.e., the probability that &amp;#039;&amp;#039;either &amp;#039;&amp;#039;&amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; or (perhaps) &amp;#039;&amp;#039;both &amp;#039;&amp;#039;will occur. Such a probability is embodied in the &amp;#039;&amp;#039;addition rule of probability&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== The addition rule of probability ==&lt;br /&gt;
&lt;br /&gt;
When rolling a fair die, let &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; denote the event of an “odd number of dots” and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; the event of the “number of dots being greater than, or equal to, &amp;lt;math&amp;gt;4&amp;lt;/math&amp;gt;”. What is the probability of the event &amp;lt;math&amp;gt;E\cup F&amp;lt;/math&amp;gt;? To calculate this we can collect together all the mutually exclusive (simple) events which comprise &amp;lt;math&amp;gt;E\cup F&amp;lt;/math&amp;gt;, and then add up the probabilities (by axiom 3). These simple events are &amp;lt;math&amp;gt;1,3,4,5&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;6&amp;lt;/math&amp;gt; dots. Each has a probability of &amp;lt;math&amp;gt;\frac{1}{6},&amp;lt;/math&amp;gt; so the required total probability is: &amp;lt;math&amp;gt;\Pr \left( E\cup F\right) =\frac{5}{6}&amp;lt;/math&amp;gt;. Consider carefully how this probability is constructed and note, in particular, that &amp;lt;math&amp;gt;\Pr \left( E\cup F\right) \neq \Pr \left( E\right) +\Pr \left( F\right) &amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; have a simple event in common (namely &amp;lt;math&amp;gt;5&amp;lt;/math&amp;gt; dots).&lt;br /&gt;
&lt;br /&gt;
In general, we can calculate the probability of the union of events using the &amp;#039;&amp;#039;addition rule of probability&amp;#039;&amp;#039;, as follows.&lt;br /&gt;
&lt;br /&gt;
* For any events, &amp;lt;math&amp;gt;E\subset S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F\subset S:\Pr (E\cup F)=\Pr (E)+\Pr (F)-\Pr (E\cap F).&amp;lt;/math&amp;gt; So, in general, &amp;lt;math&amp;gt;\Pr \left( E\cup F\right) \leq \Pr (E)+\Pr (F).&amp;lt;/math&amp;gt;&lt;br /&gt;
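&lt;br /&gt;
Both the die enumeration above and the rule itself are easy to check numerically; a minimal MATLAB sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% E = odd number of dots, F = four or more dots, on a fair die&lt;br /&gt;
E = [1 3 5]; F = [4 5 6];&lt;br /&gt;
prUnion = numel(union(E,F))/6                % = 5/6, by enumeration&lt;br /&gt;
prNaive = numel(E)/6 + numel(F)/6            % = 1: double-counts 5 dots&lt;br /&gt;
prRule  = prNaive - numel(intersect(E,F))/6  % = 5/6, the addition rule&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;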
&lt;br /&gt;
This generalises to three events, &amp;lt;math&amp;gt;E_{1},E_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E_{3}&amp;lt;/math&amp;gt; as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\Pr (E_{1}\cup E_{2}\cup E_{3}) &amp;amp;=&amp;amp;\Pr (E_{1})+\Pr (E_{2})+\Pr (E_{3}) \\&lt;br /&gt;
&amp;amp;&amp;amp;-\Pr (E_{1}\cap E_{2})-\Pr (E_{1}\cap E_{3})-\Pr (E_{2}\cap E_{3}) \\&lt;br /&gt;
&amp;amp;&amp;amp;+\Pr (E_{1}\cap E_{2}\cap E_{3}).\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can demonstrate this as follows.&lt;br /&gt;
&lt;br /&gt;
Note that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;E\cup F=\left( E\cap \bar{F}\right) \cup \left( E\cap F\right) \cup \left(\bar{E}\cap F\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e., the union of &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; mutually exclusive events. These mutually exclusive events are depicted by the shaded areas &amp;lt;math&amp;gt;\mathbf{a},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt;, respectively, in the next Figure.&lt;br /&gt;
&lt;br /&gt;
[[File:Prob_add.jpg|frameless|500px]]&lt;br /&gt;
&lt;br /&gt;
Then, since the three events &amp;lt;math&amp;gt;\left( E\cap\bar{F}\right) &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\left( E\cap F\right) &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left( \bar{E}\cap F\right)&amp;lt;/math&amp;gt; are mutually exclusive (so that the “area” occupied by &amp;lt;math&amp;gt;E\cup F&amp;lt;/math&amp;gt; is simply &amp;lt;math&amp;gt;\mathbf{a+b+c}&amp;lt;/math&amp;gt;), Axiom &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; gives&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( E\cup F\right) =\Pr \left( E\cap \bar{F}\right) +\Pr \left( \bar{E}\cap F\right) +\Pr \left( E\cap F\right) .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But also by Axiom &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;E=\left( E\cap \bar{F}\right) \cup \left(E\cap F\right) &amp;lt;/math&amp;gt;, it must be that &amp;lt;math&amp;gt;\Pr (E)=\Pr \left( E\cap \bar{F}\right)+\Pr (E\cap F);&amp;lt;/math&amp;gt; similarly, &amp;lt;math&amp;gt;\Pr \left( \bar{E}\cap F\right) =\Pr \left(F\right) -\Pr \left( E\cap F\right)&amp;lt;/math&amp;gt;. Putting all of this together gives&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr (E\cup F)=\Pr (E)+\Pr (F)-\Pr (E\cap F).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;F&amp;lt;/math&amp;gt; are mutually exclusive, so that &amp;lt;math&amp;gt;E\cap F=\emptyset&amp;lt;/math&amp;gt;, this rule reduces to Axiom 3: &amp;lt;math&amp;gt;\Pr (E\cup F)=\Pr (E)+\Pr (F)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;#039;&amp;#039;Example: &amp;#039;&amp;#039;What is the probability of drawing a Queen (&amp;lt;math&amp;gt;Q &amp;lt;/math&amp;gt;) or a Club (&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;) in a single draw from a pack of cards? Now, &amp;lt;math&amp;gt;4&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;52 &amp;lt;/math&amp;gt; cards are Queens, so &amp;lt;math&amp;gt;\Pr \left( Q\right) =\frac{4}{52},&amp;lt;/math&amp;gt; whilst &amp;lt;math&amp;gt;\Pr\left( C\right) =\frac{13}{52}.&amp;lt;/math&amp;gt; The probability of drawing the Queen of Clubs is simply &amp;lt;math&amp;gt;\frac{1}{52};&amp;lt;/math&amp;gt; i.e., &amp;lt;math&amp;gt;\Pr \left( Q\cap C\right) =\frac{1}{52}&amp;lt;/math&amp;gt;. What we require is a Club or a Queen, for which the probability is&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\Pr \left( Q\cup C\right) &amp;amp;=&amp;amp;\Pr \left( Q\right) +\Pr \left( C\right) -\Pr\left( Q\cap C\right) \\&lt;br /&gt;
&amp;amp;=&amp;amp;\frac{4}{52}+\frac{13}{52}-\frac{1}{52} \\&lt;br /&gt;
&amp;amp;=&amp;amp;\frac{16}{52}=\frac{4}{13}.\end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;#039;&amp;#039;Example: &amp;#039;&amp;#039;Consider a car journey from Manchester to London via the M6 and M1. Let &amp;lt;math&amp;gt;E=&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;heavy traffic somewhere en route&amp;#039;&amp;#039; and &amp;lt;math&amp;gt;F=&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;roadworks somewhere en route&amp;#039;&amp;#039;. It is estimated that &amp;lt;math&amp;gt;\Pr (E)=0.8&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Pr (F)=0.4,&amp;lt;/math&amp;gt; whilst the probability of NOT encountering both is &amp;lt;math&amp;gt;\Pr (\overline{E\cap F})=0.6.&amp;lt;/math&amp;gt; What is the probability of encountering heavy traffic or roadworks?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;We require &amp;lt;math&amp;gt;\Pr \left( E\cup F\right) .&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\Pr (E\cup F) &amp;amp;=&amp;amp;\Pr (E)+\Pr (F)-\Pr (E\cap F) \\&lt;br /&gt;
&amp;amp;=&amp;amp;\Pr (E)+\Pr (F)-(1-\Pr (\overline{E\cap F})) \\&lt;br /&gt;
&amp;amp;=&amp;amp;0.8+0.4-1+0.6 \\&lt;br /&gt;
&amp;amp;=&amp;amp;0.8=\Pr (E)\end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Notice that this implies, in this case, that &amp;lt;math&amp;gt;F\subset E&amp;lt;/math&amp;gt; (why?). This &amp;#039;&amp;#039;model&amp;#039;&amp;#039; then implies that when there are roadworks somewhere en route you are bound to encounter heavy traffic; on the other hand, you can encounter heavy traffic en route without ever passing through roadworks. (My own experience of this motorway inclines me towards this implication!)&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similar concepts apply when manipulating proportions as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;#039;&amp;#039;Example&amp;#039;&amp;#039;: A sample of 1000 undergraduates were asked which of Mathematics, Physics and Chemistry they had taken at A-level. The following responses were obtained: 100 just took Mathematics; 70 just took Physics; 100 just took Chemistry; 150 took Mathematics and Physics, but not Chemistry; 40 took Mathematics and Chemistry, but not Physics; and, 240 took Physics and Chemistry, but not Mathematics. What proportion took all three?&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;This can be addressed with the following diagram:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;[[File:Prob_Alevels.jpg|frameless|500px]]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;The shaded area contains the number who took all three, which can be deduced from the above information, since the total of the numbers assigned to each part of the Venn diagram must be &amp;lt;math&amp;gt;1000&amp;lt;/math&amp;gt;: the six categories given account for &amp;lt;math&amp;gt;100+70+100+150+40+240=700&amp;lt;/math&amp;gt; students, leaving &amp;lt;math&amp;gt;300&amp;lt;/math&amp;gt; who took all three. The answer is therefore &amp;lt;math&amp;gt;30\%&amp;lt;/math&amp;gt; (being &amp;lt;math&amp;gt;300&amp;lt;/math&amp;gt; out of &amp;lt;math&amp;gt;1000&amp;lt;/math&amp;gt;).&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Two further results on unions, intersections and complements which are of use (and which are fairly easy to demonstrate using Venn diagrams) are the &amp;#039;&amp;#039;&amp;#039;de Morgan Laws&amp;#039;&amp;#039;&amp;#039;, checked numerically in the sketch after this list:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\left( \bar{A}\cap \bar{B}\right) =\left( \overline{A\cup B}\right) &amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\bar{A}\cup \bar{B}=\left( \overline{A\cap B}\right) &amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
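&lt;br /&gt;
For finite sample spaces both laws can be verified by brute force; a minimal MATLAB sketch (the sets chosen here are arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% check de Morgan&amp;#039;s laws by enumeration on a small finite sample space&lt;br /&gt;
S = 1:10; A = [1 2 3 4]; B = [3 4 5 6];&lt;br /&gt;
compl = @(E) setdiff(S,E);                                 % complement within S&lt;br /&gt;
isequal(intersect(compl(A),compl(B)), compl(union(A,B)))   % law 1: true&lt;br /&gt;
isequal(union(compl(A),compl(B)), compl(intersect(A,B)))   % law 2: true&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;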
&lt;br /&gt;
= Additional resources =&lt;br /&gt;
&lt;br /&gt;
Khan Academy&lt;br /&gt;
&lt;br /&gt;
* Basic Probability and Venn Diagram [https://www.khanacademy.org/math/probability/independent-dependent-probability/addition_rule_probability/v/probability-with-playing-cards-and-venn-diagrams]&lt;br /&gt;
* Addition Rule [https://www.khanacademy.org/math/probability/independent-dependent-probability/addition_rule_probability/v/addition-rule-for-probability]&lt;br /&gt;
&lt;br /&gt;
= Footnotes =&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4279</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4279"/>
				<updated>2022-02-08T08:43:07Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in you will encounter these techniques, although the details will, of course, vary. We recommend that you ensure that you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handles&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters in a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated from a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When econometric models do not have such an analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
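&lt;br /&gt;
To see the contrast concretely, here is a minimal sketch on simulated data (fminunc requires the Optimization Toolbox; the data-generating values used are arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% closed-form OLS versus numerical minimisation (illustrative sketch)&lt;br /&gt;
rng(1); n = 200;&lt;br /&gt;
x = randn(n,1); y = 1 + 2*x + randn(n,1);   % simulated data&lt;br /&gt;
X = [ones(n,1) x];&lt;br /&gt;
bOLS = (X&amp;#039;*X)\(X&amp;#039;*y)                        % the analytical OLS formula&lt;br /&gt;
% the same estimates recovered by trial and error:&lt;br /&gt;
ssr  = @(b) sum((y - X*b).^2);              % sum of squared residuals&lt;br /&gt;
bNum = fminunc(ssr,[0;0])                   % numerical optimiser&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;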
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky area of econometric computing. It certainly helps to understand some of the underlying theory and therefore we have below separate sections on the theory and implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section will assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]]&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt;[[GMM_over|2-step est (overident)]]&amp;lt;br&amp;gt;[[numgrad_m|numgrad.m]]&amp;lt;br&amp;gt;[https://youtu.be/qwDPOomNG1c YouTube (1h 45min)] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in University of Manchester computer labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions. It also comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a huge list of functions relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
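&lt;br /&gt;
For instance, a one-line sketch (the folder name below is just a placeholder; substitute your own location):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% add a toolbox folder to the MATLAB search path for this session&lt;br /&gt;
addpath(&amp;#039;C:\Users\me\Documents\MATLAB\mfe_toolbox&amp;#039;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;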
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for any Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4278</id>
		<title>GMM over</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4278"/>
				<updated>2022-02-08T08:41:24Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This code illustrates an overidentified (number of moments &amp;gt; number of parameters) 2-step GMM estimation.&lt;br /&gt;
This code requires numgrad.m in your working directory. This is a function to calculate numerical gradients.&lt;br /&gt;
&lt;br /&gt;
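The code also calls trimr and seqa, GAUSS-style helper functions used by the Martin, Hurn and Harris codebase. If they are not on your path, the following minimal sketches of their assumed behaviour (our reconstruction, not the authors&amp;#039; originals) can be saved as trimr.m and seqa.m:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
function z = trimr(x,n1,n2)&lt;br /&gt;
    % drop the first n1 and the last n2 rows of x&lt;br /&gt;
    z = x((n1+1):(end-n2),:);&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function s = seqa(a,inc,n)&lt;br /&gt;
    % n-by-1 additive sequence: a, a+inc, a+2*inc, ...&lt;br /&gt;
    s = a + inc*(0:(n-1))&amp;#039;;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;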
&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Time Series Modelling&lt;br /&gt;
% Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;C:\Users\YOUR_WORKING_DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(6);&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,r2t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,r2t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,r2t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t,r2t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
J = t*obj;&lt;br /&gt;
Jdof = size(s,1)-size(b0,1);&lt;br /&gt;
disp([&amp;#039;J-Test of overidentifying restrictions = &amp;#039;, num2str(J) ]);&lt;br /&gt;
disp([&amp;#039;p-value                                = &amp;#039;, num2str(1-chi2cdf(J,Jdof)) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t,r2t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t,r2t];&lt;br /&gt;
        dt = repmat(ut,1,3).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,3).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t,r2t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t,r2t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,r2t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t,r2t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4277</id>
		<title>GMM over</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4277"/>
				<updated>2022-02-08T08:40:35Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This code illustrates an overidentified (number of moments &amp;gt; number of parameters) 2-step GMM estimation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Time Series Modelling&lt;br /&gt;
% Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;C:\Users\YOUR_WORKING_DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(6);&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,r2t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,r2t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,r2t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t,r2t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
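% Under the null that all moment conditions are valid, J = T*Q is&lt;br /&gt;
% asymptotically chi-squared with (moments - parameters) degrees of freedom&lt;br /&gt;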
J = t*obj;&lt;br /&gt;
Jdof = size(s,1)-size(b0,1);   % degrees of freedom: moments minus parameters&lt;br /&gt;
disp([&amp;#039;J-Test of overidentifying restrictions = &amp;#039;, num2str(J) ]);&lt;br /&gt;
disp([&amp;#039;p-value                                = &amp;#039;, num2str(1-chi2cdf(J,Jdof)) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
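% Model (level effect): drt = b(1) + b(2)*r1t + u,  E[u^2] = b(3)^2*r1t^(2*b(4)).&lt;br /&gt;
% Instruments z_t = (1, r_t, r_{t-1}) give 6 moment conditions for 4 parameters:&lt;br /&gt;
%   E[z_t*u] = 0  and  E[z_t*(u^2 - b(3)^2*r1t^(2*b(4)))] = 0.&lt;br /&gt;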
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t,r2t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t,r2t];&lt;br /&gt;
        dt = repmat(ut,1,3).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,3).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t,r2t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t,r2t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,r2t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t,r2t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4276</id>
		<title>GMM over</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM_over&amp;diff=4276"/>
				<updated>2022-02-08T08:40:00Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Created page with &amp;quot;This code illustrates an overidentified (number of moments &amp;gt; number of parameters) 2-step GMM estimation.   &amp;lt;source&amp;gt; %=========================================================...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This code illustrates an overidentified (number of moments &amp;gt; number of parameters) 2-step GMM estimation. Like the exactly identified version on the [[GMM]] page, it requires numgrad.m as well as the helper functions trimr and seqa in your working directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Modelling with Time&lt;br /&gt;
% Series: Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;C:\Users\msassrb2\Dropbox (The University of Manchester)\ECON80021\201920\GMM&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with an identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(6);&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,r2t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
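% Bartlett kernel weights w(tau) = 1 - tau/(lmax+1) guarantee a positive&lt;br /&gt;
% semi-definite estimate  S = Gamma_0 + sum_{tau=1}^{lmax} w(tau)*(Gamma_tau + Gamma_tau&amp;#039;),&lt;br /&gt;
% where Gamma_tau = sum_t d_t*d_{t-tau}&amp;#039; is the tau-th autocovariance of the moments&lt;br /&gt;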
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,r2t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,r2t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute the optimal weighting matrix at the GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t,r2t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
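% Asymptotic variance of the efficient GMM estimator:&lt;br /&gt;
%   V(bgmm) = (1/T)*inv( D&amp;#039;*inv(S)*D ),  D = Jacobian of the mean moment conditions,&lt;br /&gt;
% with D evaluated numerically at the estimates via numgrad&lt;br /&gt;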
dg = numgrad(@meaneqn,bgmm,drt,r1t,r2t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
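% Under the null that all moment conditions are valid, J = T*Q is&lt;br /&gt;
% asymptotically chi-squared with (moments - parameters) degrees of freedom&lt;br /&gt;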
J = t*obj;&lt;br /&gt;
Jdof = size(s,1)-size(b0,1);   % degrees of freedom: moments minus parameters&lt;br /&gt;
disp([&amp;#039;J-Test of overidentifying restrictions = &amp;#039;, num2str(J) ]);&lt;br /&gt;
disp([&amp;#039;p-value                                = &amp;#039;, num2str(1-chi2cdf(J,Jdof)) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
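% Model (level effect): drt = b(1) + b(2)*r1t + u,  E[u^2] = b(3)^2*r1t^(2*b(4)).&lt;br /&gt;
% Instruments z_t = (1, r_t, r_{t-1}) give 6 moment conditions for 4 parameters:&lt;br /&gt;
%   E[z_t*u] = 0  and  E[z_t*(u^2 - b(3)^2*r1t^(2*b(4)))] = 0.&lt;br /&gt;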
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t,r2t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t,r2t];&lt;br /&gt;
        dt = repmat(ut,1,3).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,3).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t,r2t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t,r2t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,r2t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t,r2t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4275</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4275"/>
				<updated>2022-02-08T08:38:41Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handlers&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When an econometric model has no such analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation; the short example below contrasts the two approaches.&lt;br /&gt;
&lt;br /&gt;
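As a minimal sketch (with simulated data and hypothetical variable names), the code below contrasts the analytical OLS formula with a numerical optimiser minimising the sum of squared residuals; for a linear model the two answers should coincide up to numerical tolerance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
rng(1);                          % simulated, reproducible data&lt;br /&gt;
x = [ones(100,1), randn(100,1)]; % regressors including a constant&lt;br /&gt;
y = x*[1; 2] + randn(100,1);     % dependent variable, true coefficients (1,2)&lt;br /&gt;
bols = (x&amp;#039;*x)\(x&amp;#039;*y);           % analytical OLS solution&lt;br /&gt;
ssr  = @(b) sum((y - x*b).^2);   % objective: sum of squared residuals&lt;br /&gt;
bnum = fminunc(ssr,[0;0],optimset(&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;));  % numerical trial and error&lt;br /&gt;
disp([bols bnum])                % the two columns should (almost) coincide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;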
Nonlinear optimisation is a very important, but also very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and its implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt;[[GMM_over|2-step est (overident)]]&amp;lt;br&amp;gt;[[numgrad_m|numgrad.m]]&amp;lt;br&amp;gt;[https://youtu.be/qwDPOomNG1c video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in the University of Manchester Computer Labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a large collection of functions for working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4274</id>
		<title>GMM</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4274"/>
				<updated>2022-02-08T08:37:30Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The code below performs a standard 2-step GMM estimation.&lt;br /&gt;
In this code the estimation is exactly identified (number of moments = number of parameters).&lt;br /&gt;
The code requires [[numgrad_m|numgrad.m]], a function that calculates numerical gradients, in your working directory. It also calls the helper functions trimr and seqa, which are not built-in MATLAB functions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Modelling with Time&lt;br /&gt;
% Series: Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
% &lt;br /&gt;
% This code by Ralf Becker, March 2021&lt;br /&gt;
% http://eclr.humanities.manchester.ac.uk/index.php/MATLAB&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;YOUR DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with an identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(length(b0));&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
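% Bartlett kernel weights w(tau) = 1 - tau/(lmax+1) guarantee a positive&lt;br /&gt;
% semi-definite estimate  S = Gamma_0 + sum_{tau=1}^{lmax} w(tau)*(Gamma_tau + Gamma_tau&amp;#039;),&lt;br /&gt;
% where Gamma_tau = sum_t d_t*d_{t-tau}&amp;#039; is the tau-th autocovariance of the moments&lt;br /&gt;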
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute the optimal weighting matrix at the GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
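% Asymptotic variance of the efficient GMM estimator:&lt;br /&gt;
%   V(bgmm) = (1/T)*inv( D&amp;#039;*inv(S)*D ),  D = Jacobian of the mean moment conditions,&lt;br /&gt;
% with D evaluated numerically at the estimates via numgrad&lt;br /&gt;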
dg = numgrad(@meaneqn,bgmm,drt,r1t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
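% With 4 moment conditions and 4 parameters the model is exactly identified:&lt;br /&gt;
% there are no overidentifying restrictions to test, and the objective is&lt;br /&gt;
% (numerically) zero at the optimum&lt;br /&gt;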
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
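% Model (level effect): drt = b(1) + b(2)*r1t + u,  E[u^2] = b(3)^2*r1t^(2*b(4)).&lt;br /&gt;
% Instruments z_t = (1, r_t) give 4 moment conditions for 4 parameters:&lt;br /&gt;
%   E[z_t*u] = 0  and  E[z_t*(u^2 - b(3)^2*r1t^(2*b(4)))] = 0.&lt;br /&gt;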
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t];&lt;br /&gt;
        dt = repmat(ut,1,2).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,2).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4273</id>
		<title>Numgrad m</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4273"/>
				<updated>2022-02-08T08:36:47Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Save the code below as numgrad.m in your working folder. This is required to make the GMM code work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
%------------------------------------------------------------------------- &lt;br /&gt;
%   Computes numerical gradient at each observation&lt;br /&gt;
%-------------------------------------------------------------------------&lt;br /&gt;
function G = numgrad( f,x,varargin )&lt;br /&gt;
&lt;br /&gt;
    f0  = feval( f,x,varargin{:} );             % n by 1&lt;br /&gt;
    n   = length( f0 );&lt;br /&gt;
    k   = length( x );&lt;br /&gt;
    fdf = zeros( n,k );&lt;br /&gt;
 &lt;br /&gt;
    % Compute step size &lt;br /&gt;
    dx      = sqrt( eps )*( abs( x ) + eps );&lt;br /&gt;
    xh      = x + dx;&lt;br /&gt;
    dx      = xh - x;    &lt;br /&gt;
    ind     = dx &amp;lt; sqrt(eps);&lt;br /&gt;
    dx(ind) = sqrt(eps);&lt;br /&gt;
&lt;br /&gt;
    % Compute gradient&lt;br /&gt;
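    % Forward differences, one parameter at a time (e_i is the i-th unit vector):&lt;br /&gt;
    %   G(:,i) = ( f(x + dx(i)*e_i) - f(x) ) / dx(i)&lt;br /&gt;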
    xdx = bsxfun( @plus, diag( dx ), x );&lt;br /&gt;
    for i=1:k&lt;br /&gt;
        &lt;br /&gt;
        fdf(:,i) = feval( f, xdx(:,i), varargin{:} );&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    G0 = repmat( f0, 1, k );                        % n by k        &lt;br /&gt;
    G1 = repmat( dx&amp;#039;, n, 1 );&lt;br /&gt;
    G  = ( fdf-G0 )./G1;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4272</id>
		<title>Numgrad m</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4272"/>
				<updated>2022-02-08T08:36:06Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;source&amp;gt;&lt;br /&gt;
%------------------------------------------------------------------------- &lt;br /&gt;
%   Computes numerical gradient at each observation&lt;br /&gt;
%-------------------------------------------------------------------------&lt;br /&gt;
function G = numgrad( f,x,varargin )&lt;br /&gt;
&lt;br /&gt;
    f0  = feval( f,x,varargin{:} );             % n by 1&lt;br /&gt;
    n   = length( f0 );&lt;br /&gt;
    k   = length( x );&lt;br /&gt;
    fdf = zeros( n,k );&lt;br /&gt;
 &lt;br /&gt;
    % Compute step size &lt;br /&gt;
    dx      = sqrt( eps )*( abs( x ) + eps );&lt;br /&gt;
    xh      = x + dx;&lt;br /&gt;
    dx      = xh - x;    &lt;br /&gt;
    ind     = dx &amp;lt; sqrt(eps);&lt;br /&gt;
    dx(ind) = sqrt(eps);&lt;br /&gt;
&lt;br /&gt;
    % Compute gradient&lt;br /&gt;
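    % Forward differences, one parameter at a time (e_i is the i-th unit vector):&lt;br /&gt;
    %   G(:,i) = ( f(x + dx(i)*e_i) - f(x) ) / dx(i)&lt;br /&gt;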
    xdx = bsxfun( @plus, diag( dx ), x );&lt;br /&gt;
    for i=1:k&lt;br /&gt;
        &lt;br /&gt;
        fdf(:,i) = feval( f, xdx(:,i), varargin{:} );&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    G0 = repmat( f0, 1, k );                        % n by k        &lt;br /&gt;
    G1 = repmat( dx&amp;#039;, n, 1 );&lt;br /&gt;
    G  = ( fdf-G0 )./G1;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4271</id>
		<title>Numgrad m</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Numgrad_m&amp;diff=4271"/>
				<updated>2022-02-08T08:35:21Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Created page with &amp;quot;&amp;lt;source&amp;gt; %-------------------------------------------------------------------------  %   Computes numerical gradient at each observation %-------------------------------------...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;source&amp;gt;&lt;br /&gt;
%------------------------------------------------------------------------- &lt;br /&gt;
%   Computes numerical gradient at each observation&lt;br /&gt;
%-------------------------------------------------------------------------&lt;br /&gt;
function G = numgrad( f,x,varargin )&lt;br /&gt;
&lt;br /&gt;
    f0  = feval( f,x,varargin{:} );             % n by 1&lt;br /&gt;
    n   = length( f0 );&lt;br /&gt;
    k   = length( x );&lt;br /&gt;
    fdf = zeros( n,k );&lt;br /&gt;
 &lt;br /&gt;
    % Compute step size &lt;br /&gt;
    dx      = sqrt( eps )*( abs( x ) + eps );&lt;br /&gt;
    xh      = x + dx;&lt;br /&gt;
    dx      = xh - x;    &lt;br /&gt;
    ind     = dx &amp;lt; sqrt(eps);&lt;br /&gt;
    dx(ind) = sqrt(eps);&lt;br /&gt;
&lt;br /&gt;
    % Compute gradient&lt;br /&gt;
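    % Forward differences, one parameter at a time (e_i is the i-th unit vector):&lt;br /&gt;
    %   G(:,i) = ( f(x + dx(i)*e_i) - f(x) ) / dx(i)&lt;br /&gt;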
    xdx = bsxfun( @plus, diag( dx ), x );&lt;br /&gt;
    for i=1:k&lt;br /&gt;
        &lt;br /&gt;
        fdf(:,i) = feval( f, xdx(:,i), varargin{:} );&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
    G0 = repmat( f0, 1, k );                        % n by k        &lt;br /&gt;
    G1 = repmat( dx&amp;#039;, n, 1 );&lt;br /&gt;
    G  = ( fdf-G0 )./G1;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4270</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4270"/>
				<updated>2022-02-08T08:34:54Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handlers&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When an econometric model has no such analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation; the short example below contrasts the two approaches.&lt;br /&gt;
&lt;br /&gt;
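As a minimal sketch (with simulated data and hypothetical variable names), the code below contrasts the analytical OLS formula with a numerical optimiser minimising the sum of squared residuals; for a linear model the two answers should coincide up to numerical tolerance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
rng(1);                          % simulated, reproducible data&lt;br /&gt;
x = [ones(100,1), randn(100,1)]; % regressors including a constant&lt;br /&gt;
y = x*[1; 2] + randn(100,1);     % dependent variable, true coefficients (1,2)&lt;br /&gt;
bols = (x&amp;#039;*x)\(x&amp;#039;*y);           % analytical OLS solution&lt;br /&gt;
ssr  = @(b) sum((y - x*b).^2);   % objective: sum of squared residuals&lt;br /&gt;
bnum = fminunc(ssr,[0;0],optimset(&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;));  % numerical trial and error&lt;br /&gt;
disp([bols bnum])                % the two columns should (almost) coincide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;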
Nonlinear optimisation is a very important, but also very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and its implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt;[[numgrad_m|numgrad.m]]&amp;lt;br&amp;gt;[https://youtu.be/qwDPOomNG1c video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in the University of Manchester Computer Labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a large collection of functions for working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4269</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4269"/>
				<updated>2022-02-08T08:33:50Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handlers&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When an econometric model has no such analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation; the short example below contrasts the two approaches.&lt;br /&gt;
&lt;br /&gt;
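As a minimal sketch (with simulated data and hypothetical variable names), the code below contrasts the analytical OLS formula with a numerical optimiser minimising the sum of squared residuals; for a linear model the two answers should coincide up to numerical tolerance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
rng(1);                          % simulated, reproducible data&lt;br /&gt;
x = [ones(100,1), randn(100,1)]; % regressors including a constant&lt;br /&gt;
y = x*[1; 2] + randn(100,1);     % dependent variable, true coefficients (1,2)&lt;br /&gt;
bols = (x&amp;#039;*x)\(x&amp;#039;*y);           % analytical OLS solution&lt;br /&gt;
ssr  = @(b) sum((y - x*b).^2);   % objective: sum of squared residuals&lt;br /&gt;
bnum = fminunc(ssr,[0;0],optimset(&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;));  % numerical trial and error&lt;br /&gt;
disp([bols bnum])                % the two columns should (almost) coincide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;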
Nonlinear optimisation is a very important, but also very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and its implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt;[https://youtu.be/qwDPOomNG1c video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in the University of Manchester computer labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a large number of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
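&lt;br /&gt;
For example, a minimal sketch of that one line, assuming you unzipped the MFE toolbox into a folder called C:\myMATLABtoolboxes\mfe_toolbox (a hypothetical location - replace it with wherever you saved the toolbox on your own machine):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% add a toolbox folder (and all its subfolders) to the MATLAB search path&lt;br /&gt;
addpath(genpath(&amp;#039;C:\myMATLABtoolboxes\mfe_toolbox&amp;#039;))   % hypothetical path - adjust to your machine&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;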
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox: Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4268</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4268"/>
				<updated>2022-02-08T08:30:57Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in you will encounter these techniques, although the details will, of course, vary. We recommend that you ensure that you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handlers&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters in a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated from a simple closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When an econometric model has no such analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky area of econometric computing. It certainly helps to understand some of the underlying theory and therefore we have below separate sections on the theory and implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section will assume that you have mastered all the techniques covered in the [[#Essential|Essential Programming Section]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt; [https://youtu.be/qwDPOomNG1c video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in the University of Manchester computer labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a large number of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox: Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4267</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4267"/>
				<updated>2022-02-08T08:29:16Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in you will encounter these techniques, although the details will, of course, vary. We recommend that you ensure that you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handlers&amp;lt;br&amp;gt; Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters in a linear econometric model can, under certain assumptions, be found analytically. We call them the Ordinary Least Squares (OLS) estimates and they are easily calculated from a simple closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When an econometric model has no such analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky area of econometric computing. It certainly helps to understand some of the underlying theory and therefore we have below separate sections on the theory and implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section will assume that you have mastered all the techniques covered in the [[#Essential|Essential Programming Section]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Methods of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|2-step est]]&amp;lt;br&amp;gt; [https://youtu.be/NeUiDYr3ML0 video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in the University of Manchester computer labs. If you make regular use of MATLAB you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a large number of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox: Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometric Textbook. There is a wide range of very good Econometric Textbooks available. If you are concerned about programming in MATLAB then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4266</id>
		<title>GMM</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4266"/>
				<updated>2022-02-08T08:25:57Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The code below performs a standard 2-step GMM estimation.&lt;br /&gt;
In this code the estimation is exactly identified (the number of moments equals the number of parameters).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Time Series Modelling&lt;br /&gt;
% Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
% &lt;br /&gt;
% This code by Ralf Becker, March 2021&lt;br /&gt;
% http://eclr.humanities.manchester.ac.uk/index.php/MATLAB&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;YOUR DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(length(b0));&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t];&lt;br /&gt;
        dt = repmat(ut,1,2).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,2).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
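&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Note: trimr, seqa and numgrad are not built-in MATLAB functions; they&lt;br /&gt;
% are helpers that come with the Martin, Hurn and Harris companion code.&lt;br /&gt;
% If you do not have that code, the minimal sketches below (assumed&lt;br /&gt;
% GAUSS-style behaviour, not the original implementations) should do the&lt;br /&gt;
% same job for this script.&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function z = trimr(x,n1,n2)&lt;br /&gt;
    % drop the first n1 and the last n2 rows of x&lt;br /&gt;
    z = x((n1+1):(end-n2),:);&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function s = seqa(a,b,n)&lt;br /&gt;
    % column vector of n values starting at a, increasing in steps of b&lt;br /&gt;
    s = a + b*(0:n-1)&amp;#039;;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function g = numgrad(f,b,varargin)&lt;br /&gt;
    % simple two-sided numerical derivative of f with respect to b&lt;br /&gt;
    h  = 1e-5;&lt;br /&gt;
    f0 = f(b,varargin{:});&lt;br /&gt;
    g  = zeros(length(f0),length(b));&lt;br /&gt;
    for i = 1:length(b)&lt;br /&gt;
        bp = b; bp(i) = bp(i) + h;&lt;br /&gt;
        bm = b; bm(i) = bm(i) - h;&lt;br /&gt;
        g(:,i) = (f(bp,varargin{:}) - f(bm,varargin{:}))/(2*h);&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;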
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4265</id>
		<title>GMM</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4265"/>
				<updated>2022-02-08T08:24:56Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;source&amp;gt;&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Time Series Modelling&lt;br /&gt;
% Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
% &lt;br /&gt;
% This code by Ralf Becker, March 2021&lt;br /&gt;
% http://eclr.humanities.manchester.ac.uk/index.php/MATLAB&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;YOUR DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(length(b0));&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t];&lt;br /&gt;
        dt = repmat(ut,1,2).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,2).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with  user defined &lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Diff_in_Diff&amp;diff=4264</id>
		<title>Diff in Diff</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Diff_in_Diff&amp;diff=4264"/>
				<updated>2021-04-15T08:16:23Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Created page with &amp;quot;An excellent example of a Difference-in-Difference application is the one where changes in the Minimum Legal Drinking Age are related to casualties resulting from traffic acci...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;An excellent example of a Difference-in-Difference application is the one where changes in the Minimum Legal Drinking Age are related to casualties resulting from traffic accidents. A walkthrough of such an application is available from [https://github.com/datasquad/RforQM/blob/master/Diff_in_Diff/mlda-dd.pdf Ralf Becker&amp;#039;s github page]. The data used in this example are available from here: [https://github.com/datasquad/RforQM/blob/master/Diff_in_Diff/deaths.Rdata deaths.Rdata] and here [https://github.com/datasquad/RforQM/blob/master/Diff_in_Diff/USstate_pop_1980.xlsx USstate_pop_1980.xlsx].&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R&amp;diff=4263</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R&amp;diff=4263"/>
				<updated>2021-04-15T08:10:00Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Intermediate Techniques */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;R is open-source software that has been adopted by the statistical community as its standard software package. It is command driven, meaning that you have to give the software written commands to indicate what you want it to do. At first sight this is not as convenient as menu-driven software, but it has the huge advantage that you can collect a large set of commands in a file (a script file) and then have R execute all these commands in one go. This serves as a great documentation of the work you have done and, most importantly, it makes it easy to change a small aspect of your work and rerun the entire project at the press of a button rather than having to laboriously retrace all your steps through menus.&lt;br /&gt;
&lt;br /&gt;
The fixed cost of learning this software is higher than that of a menu-driven statistical software package. But if you engage with this process the rewards will be great.&lt;br /&gt;
&lt;br /&gt;
Last but not least, R has a killer advantage. It is free!!!&lt;br /&gt;
&lt;br /&gt;
== Installing the Software ==&lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/EHjakj38Nnw?hd=1 Installation Demonstration]&lt;br /&gt;
&lt;br /&gt;
To work with R you will have to install the basic software package R, but we also advise you to install RStudio, an add-on to R (formally called an Integrated Development Environment, or IDE) that makes working with R easier.&lt;br /&gt;
&lt;br /&gt;
As this is open-source software that you get for free it is perhaps understandable that the webpages from which you get the R software aren&amp;#039;t as slick as you expect. And the language tends to be somewhat more techy, but don&amp;#039;t worry, you&amp;#039;ll be fine.&lt;br /&gt;
&lt;br /&gt;
So here are the steps you should take. &lt;br /&gt;
&lt;br /&gt;
# Download and install the R software, which is available from the [http://cran.rstudio.com/ CRAN] website. Follow the &amp;quot;Download and Install R&amp;quot; link for your operating system (and do not be tempted to download the source code!). If you have a Windows OS, only choose the &amp;quot;base&amp;quot; package on the following screen. Then follow the usual installation instructions. You could already work with R at this point, but we recommend that you first undertake the next step.&lt;br /&gt;
# Once you have installed R, you can download and install RStudio. It is available from the [http://www.rstudio.com/products/rstudio/download/ RStudio] download page.&lt;br /&gt;
&lt;br /&gt;
The basic R software has some core functionality, but the power of R comes from the ability to use code written by other people to perform statistical and econometric techniques. These additional pieces of software are called packages, and the next step will be to learn how to use them.&lt;br /&gt;
&lt;br /&gt;
== Data Sets ==&lt;br /&gt;
&lt;br /&gt;
We use a number of datasets on this page. For convenience they are listed here:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Women&amp;#039;s wages &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Crime Statistics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Baseball Wages&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Description&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
| Observations for 753 females on wages, family and work circumstances, and hours worked&lt;br /&gt;
| Crime Statistics for 90 counties in North Carolina (US) for Years 1981 to 1987 (Panel Data); includes a number of variables to characterise the counties&lt;br /&gt;
| Salary and other information (such as race, position and performance information) for 353 Baseball Players in 1993&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Files&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
| [[media:mroz.xls|mroz.xls]] &amp;lt;br&amp;gt; [[media:mroz.csv|mroz.csv]] &amp;lt;br&amp;gt; [[MROZ_Variable_Description|Variable Description]]&lt;br /&gt;
| [[media:crim4.xls|crime4.xls]]  &amp;lt;br&amp;gt; [[media:crim4.csv|crime4.csv]] &amp;lt;br&amp;gt; [[Crim4_Variable_Description|Variable Description]]&lt;br /&gt;
| [[media:mlb1.xls|mlb1.xls]] &amp;lt;br&amp;gt; [[media:mlb1.csv|mlb1.csv]] &amp;lt;br&amp;gt; [[MLB1_Variable_Description|Variable Description]]&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Source&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Following the links in the above table you will also be able to download R data files for these datasets.&lt;br /&gt;
&lt;br /&gt;
== Basic Tasks ==&lt;br /&gt;
&lt;br /&gt;
To illustrate how to perform basic tasks in R we will use the Women&amp;#039;s wages dataset ([[media:mroz.csv|mroz.csv]]). This comma separated values (csv) file contains a well-used cross-sectional dataset with 753 observations on female members of the labour force in the US (in 1975). It contains variables such as the number of children, the wage, the hours worked, etc.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| First Steps&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Using&amp;lt;br&amp;gt;Packages&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| [[R_FirstSteps|Discussion]] &lt;br /&gt;
| [[R_Data|Discussion]]&lt;br /&gt;
| [[R_Packages|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basic Data&amp;lt;br&amp;gt;Analysis&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Data Analysis&amp;lt;br&amp;gt;Tidyverse&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| A&amp;lt;br&amp;gt;Regression&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Analysis|Discussion]] &lt;br /&gt;
| [[R_AnalysisTidy|Discussion]] &lt;br /&gt;
| [[R_Regression|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Creating &amp;lt;br&amp;gt; Graphics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Graphing|Discussion]] &amp;lt;br&amp;gt; [[R_Graphing_Treat|Treat Yourself]]&lt;br /&gt;
| [[R_SavingData|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Bread and Butter Techniques ==&lt;br /&gt;
&lt;br /&gt;
These are standard econometric tasks that any applied econometrician, and indeed any aspiring economics student, should be familiar with.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Dummy&amp;lt;br&amp;gt;variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Predicting from&amp;lt;br&amp;gt;a Regression&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| [[Dummy Variables in R|Discussion]] &lt;br /&gt;
| [[Predicting from Regression in R|Discussion]] &lt;br /&gt;
 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Standard&amp;lt;br&amp;gt;inference&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Regression&amp;lt;br&amp;gt;diagnostics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust&amp;lt;br&amp;gt;standard errors&lt;br /&gt;
|-&lt;br /&gt;
| [[Regression Inference in R|Discussion]] &lt;br /&gt;
| [[R_reg_diag|Discussion]] &lt;br /&gt;
| [[R_robust_se|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Intermediate Techniques ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Panel&amp;lt;br&amp;gt;Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental Variables&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Matching&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Difference-in-Difference&lt;br /&gt;
|-&lt;br /&gt;
| [[Panel in R|Discussion]] &lt;br /&gt;
| [[IV in R|Discussion]] &lt;br /&gt;
| [[R_Matching|Discussion]]  &lt;br /&gt;
| [[Diff_in_Diff|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate Time&amp;lt;br&amp;gt;Series Modelling&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Multivariate Time&amp;lt;br&amp;gt;Series Modelling&amp;lt;br&amp;gt;VAR&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Time Series&amp;lt;br&amp;gt;Plotting&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate and&amp;lt;br&amp;gt;Multivariate&amp;lt;br&amp;gt;GARCH Modelling&lt;br /&gt;
|-&lt;br /&gt;
| [[R_TimeSeries|Discussion]] &lt;br /&gt;
| [[R_TS_VAR|Discussion]] &lt;br /&gt;
| [[R_TSplots|Discussion]] &amp;lt;br&amp;gt;uses the following data files:&amp;lt;br&amp;gt;[[Media:AggInfl.csv|AggInfl.csv]],[[Media:CoreInfl.csv|CoreInfl.csv]]&amp;lt;br&amp;gt;[[Media:EnergInfl.csv|EnergInfl.csv]],[[Media:FoodInfl.csv|FoodInfl.csv]]&lt;br /&gt;
| [[R_GARCH|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian Estimation&amp;lt;br&amp;gt;Principle&lt;br /&gt;
|-&lt;br /&gt;
| [[R_BayesGrid|Discussion]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Some Fun Stuff ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Plotting &amp;lt;br&amp;gt;Maps&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Scraping &amp;lt;br&amp;gt;the internet&lt;br /&gt;
|-&lt;br /&gt;
| [[Maps in R|Discussion]] &lt;br /&gt;
| [[Scraping in R|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Econometric Demonstrations ==&lt;br /&gt;
&lt;br /&gt;
In this section you can find code that is useful for demonstrating a few econometric issues.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Sampling and&amp;lt;br&amp;gt;LLN and CLT&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Demonstrating OLS&amp;lt;br&amp;gt;estimator unbiasedness&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Demonstrating OLS estimator&amp;lt;br&amp;gt;asymptotic behaviour&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Sampling|Discussion]]&lt;br /&gt;
| [[R_Unbiasedness|Discussion]]  &lt;br /&gt;
| [[R_Asymptotics|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors, Maintenance and Contributions ==&lt;br /&gt;
&lt;br /&gt;
This wiki was created by [mailto:ralf.becker@manchester.ac.uk Ralf Becker] and [mailto:james.lincoln@manchester.ac.uk James Lincoln] with the financial support of a University of Manchester CHERIL grant. If you have any suggestions please contact us by email. Contributions to this wiki are encouraged. Please contact us if you are interested.&lt;br /&gt;
&lt;br /&gt;
An easy way to create content for this page is to write RMarkdown documents, which can then easily be translated, thanks to pandoc, to MediaWiki format (see [http://nicercode.github.io/guides/reports/]). From the command line call &amp;quot;pandoc -f markdown -t mediawiki FILENAME.md -o FILENAME.mediawiki&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== More references ==&lt;br /&gt;
&lt;br /&gt;
There is a plethora of resources if you want to learn R (which is one reason why this resource does not go into too much detail). Here are a few places to start.&lt;br /&gt;
&lt;br /&gt;
* A dedicated Twitter channel for Econometrics with R [https://twitter.com/Rstats4Econ]&lt;br /&gt;
* Rob Hyndman has great material [http://robjhyndman.com/publications/software/], some of which will be referred to here.&lt;br /&gt;
* My colleague Juanjo Medina has material for criminologists that includes good intros to graphing and some basic statistics [http://jjmedinaariza.github.io/R-for-Criminologists/]&lt;br /&gt;
* [http://www.computerworld.com/article/2497143/business-intelligence-beginner-s-guide-to-r-introduction.html?null A Beginner&amp;#039;s Guide to R]&lt;br /&gt;
* Florian Heiss has written an R companion book to Wooldridge&amp;#039;s Introductory Econometrics. It is available for free [http://www.urfie.net/read/mobile/index.html#p=1 online] but you can also get a [http://www.urfie.net/index.html hardcopy] &lt;br /&gt;
* Some R resources provided by [http://www.ats.ucla.edu/stat/r/ UCLA]&lt;br /&gt;
* [http://www.statmethods.net Quick-R] web-site and [http://www.manning.com/kabacoff2/RiA2E_meap_ch1.pdf first chapter of R in Action]&lt;br /&gt;
* Just TryR it! [http://tryr.codeschool.com/levels/1/challenges/1]&lt;br /&gt;
* A practice RData file [https://drive.google.com/file/d/0B-eFeuIjpKsOWmdpOUsxT2Via3M/view?usp=sharing], use this to load required packages [https://drive.google.com/file/d/0B-eFeuIjpKsOc3VYYnh1bEtZcnM/view?usp=sharing]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4262</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4262"/>
				<updated>2021-03-20T21:34:27Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handles and&amp;lt;br&amp;gt;Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can (under certain assumptions) be found analytically. We call them the Ordinary Least Squares (OLS) estimates, and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When econometric models do not have such an analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and the implementation. A small sketch of the contrast follows.&lt;br /&gt;
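&lt;br /&gt;
To make the contrast concrete, here is a minimal sketch (not the [[FctExampleCode#OLSestm|OLSest.m]] function itself, and with simulated data purely for illustration) that computes the closed-form OLS estimate and then recovers essentially the same numbers by numerically minimising the sum of squared residuals with fminsearch.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% Minimal sketch: closed-form OLS versus nonlinear optimisation&lt;br /&gt;
rng(42);                        % reproducible simulated data&lt;br /&gt;
n = 200;&lt;br /&gt;
x = [ones(n,1) randn(n,1)];     % constant and one regressor&lt;br /&gt;
beta = [1; 0.5];                % true parameter values&lt;br /&gt;
y = x*beta + randn(n,1);&lt;br /&gt;
&lt;br /&gt;
bols = (x&amp;#039;*x)\(x&amp;#039;*y);          % closed-form OLS estimate&lt;br /&gt;
&lt;br /&gt;
ssr  = @(b) sum((y - x*b).^2);  % objective: sum of squared residuals&lt;br /&gt;
bnum = fminsearch(ssr,[0;0]);   % numerical &amp;quot;trial and error&amp;quot; minimisation&lt;br /&gt;
&lt;br /&gt;
disp([bols bnum])               % the two columns should (almost) coincide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;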
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Method of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|Basic Code]]&amp;lt;br&amp;gt; [https://youtu.be/NeUiDYr3ML0 video] &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in University of Manchester computer labs. If you make regular use of MATLAB, you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a long list of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
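&lt;br /&gt;
For example (the folder path below is purely illustrative and should point at wherever you saved the toolbox), that one line might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
% illustrative only: adjust the path to your own toolbox folder&lt;br /&gt;
addpath(&amp;#039;H:\matlab\toolboxes\mfe_toolbox&amp;#039;);&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;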
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometrics textbook. There is a wide range of very good Econometrics textbooks available. If you are concerned with programming in MATLAB, then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4261</id>
		<title>GMM</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4261"/>
				<updated>2021-03-20T15:52:57Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Modelling with Time&lt;br /&gt;
% Series: Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
% &lt;br /&gt;
% This code by Ralf Becker, March 2021&lt;br /&gt;
% http://eclr.humanities.manchester.ac.uk/index.php/MATLAB&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;YOUR DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
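% Note: trimr and seqa are Gauss-style helper functions, not MATLAB&lt;br /&gt;
% built-ins; they are assumed to be on the path alongside this script,&lt;br /&gt;
% like gradp.m. trimr(x,a,b) drops a rows from the top and b rows from&lt;br /&gt;
% the bottom of x; seqa(start,step,n) builds an additive sequence.&lt;br /&gt;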
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(length(b0));&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
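% (the weight (1.0-tau/(lmax+1)) is the Bartlett kernel weight, which&lt;br /&gt;
% keeps the estimated covariance matrix positive semi-definite)&lt;br /&gt;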
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
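% t*obj is Hansen&amp;#039;s J statistic; with more moment conditions than&lt;br /&gt;
% parameters it is asymptotically chi-squared, with degrees of freedom&lt;br /&gt;
% equal to the number of moments minus the number of parameters&lt;br /&gt;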
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
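% If gam is close to its true value, the scaled changes drt./r1t.^gam&lt;br /&gt;
% should display roughly constant volatility across the sample; comparing&lt;br /&gt;
% the four panels is an informal check of the level effect&lt;br /&gt;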
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t];&lt;br /&gt;
        dt = repmat(ut,1,2).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,2).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with user-defined&lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4260</id>
		<title>GMM</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=GMM&amp;diff=4260"/>
				<updated>2021-03-20T15:51:50Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Created page with &amp;quot;%========================================================================= % % Program to estimate level effect in interest rates by GMM % % Code based on Martin, Hurn and Har...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;%=========================================================================&lt;br /&gt;
%&lt;br /&gt;
% Program to estimate level effect in interest rates by GMM&lt;br /&gt;
%&lt;br /&gt;
% Code based on Martin, Hurn and Harris, Econometric Modelling with Time&lt;br /&gt;
% Series: Specification, Estimation and Testing&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
% &lt;br /&gt;
% This code by Ralf Becker, March 2021&lt;br /&gt;
% http://eclr.humanities.manchester.ac.uk/index.php/MATLAB&lt;br /&gt;
%=========================================================================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
clear all;&lt;br /&gt;
clc;&lt;br /&gt;
cd &amp;#039;YOUR DIRECTORY&amp;#039;&lt;br /&gt;
&lt;br /&gt;
% Load data --- monthly December 1946 to February 1991&lt;br /&gt;
%     3 month maturity&lt;br /&gt;
% extracted from the datafile provided by &lt;br /&gt;
% Martin, Hurn and Harris&lt;br /&gt;
% https://www.cambridge.org/features/econmodelling/chapter10.htm&lt;br /&gt;
&lt;br /&gt;
[rt, ~, ~] = xlsread(&amp;#039;US3monthRate.xlsx&amp;#039;);&lt;br /&gt;
&lt;br /&gt;
drt = trimr(rt,2,0) - trimr(rt,1,1); % creates \Delta r_{t+1}&lt;br /&gt;
r1t = trimr(rt,1,1);                 % creates r_t&lt;br /&gt;
r2t = trimr(rt,0,2);&lt;br /&gt;
t   = length(drt);&lt;br /&gt;
&lt;br /&gt;
%% It is typically good practice to visualise the data&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t); % Creates year sequence&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,1);&lt;br /&gt;
plot(tt,r1t);&lt;br /&gt;
title(&amp;#039;r_t&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(1,2,2);&lt;br /&gt;
plot(tt,drt);&lt;br /&gt;
title(&amp;#039;\Delta r_{t+1}&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Estimate the model in the first stage with identity weighting matrix&lt;br /&gt;
&lt;br /&gt;
ops  = optimset(&amp;#039;LargeScale&amp;#039;,&amp;#039;off&amp;#039;,&amp;#039;Display&amp;#039;,&amp;#039;off&amp;#039;);&lt;br /&gt;
b0   = [0.1;0.1;0.1;1.0];&lt;br /&gt;
w0 = eye(length(b0));&lt;br /&gt;
bgmm1 = fminunc(@(b) qw(b,drt,r1t,w0),b0,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;First Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm1)&lt;br /&gt;
&lt;br /&gt;
%% Now estimate the optimal weighting matrix&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm1,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
w1 = s./t;&lt;br /&gt;
&lt;br /&gt;
% Use this as the weighting matrix for the next pass to the optimisation&lt;br /&gt;
% function&lt;br /&gt;
&lt;br /&gt;
%% 2nd Stage &lt;br /&gt;
&lt;br /&gt;
bgmm2 = fminunc(@(b) qw(b,drt,r1t,w1),bgmm1,ops);&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039;Second Stage GMM estimates&amp;#039;)&lt;br /&gt;
disp(bgmm2)&lt;br /&gt;
&lt;br /&gt;
%% Further Iterations&lt;br /&gt;
% You could run further iterations&lt;br /&gt;
% 1) Re-calculate d&lt;br /&gt;
% 2) Re-calculate the optimal weighting matrix w based on the new d&lt;br /&gt;
% 3) Re-estimate using the new w&lt;br /&gt;
%&lt;br /&gt;
% For now we stop here&lt;br /&gt;
bgmm = bgmm2;&lt;br /&gt;
obj = qw(bgmm,drt,r1t,w1);&lt;br /&gt;
&lt;br /&gt;
%% Calculate standard errors&lt;br /&gt;
% Compute optimal weighting matrix at GMM estimates&lt;br /&gt;
% using Newey-West estimator&lt;br /&gt;
lmax = 5;   % lag for the NW estimate &lt;br /&gt;
d = meqn(bgmm,drt,r1t);&lt;br /&gt;
&lt;br /&gt;
% this will calculate Newey-West VCM using lmax lags&lt;br /&gt;
s   = d&amp;#039;*d;&lt;br /&gt;
tau = 1;&lt;br /&gt;
while tau &amp;lt;= lmax&lt;br /&gt;
    wtau = d((tau+1):size(d,1),:)&amp;#039;*d(1:(size(d,1)-tau),:);&lt;br /&gt;
    s    = s + (1.0-tau/(lmax+1))*(wtau + wtau&amp;#039;);&lt;br /&gt;
    tau  = tau + 1;&lt;br /&gt;
end&lt;br /&gt;
s = s./t;&lt;br /&gt;
&lt;br /&gt;
% Compute standard errors of GMM estimates&lt;br /&gt;
dg = numgrad(@meaneqn,bgmm,drt,r1t);&lt;br /&gt;
v  = dg&amp;#039;*inv(s)*dg;&lt;br /&gt;
cov = inv(v)/t;&lt;br /&gt;
se = sqrt(diag(cov));&lt;br /&gt;
&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
disp([&amp;#039;The value of the objective function  = &amp;#039;, num2str(obj) ]);&lt;br /&gt;
disp([&amp;#039;J-test                               = &amp;#039;, num2str(t*obj) ]);&lt;br /&gt;
disp(&amp;#039;Estimates     Std err.   t-stats&amp;#039;);&lt;br /&gt;
disp( [ bgmm  se  bgmm./se ])&lt;br /&gt;
disp([&amp;#039;Newey-West estimator with max lag    = &amp;#039;, num2str(lmax) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
%% Inference t-tests&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.0&lt;br /&gt;
stat = (bgmm(4) - 0.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 0.5&lt;br /&gt;
stat = (bgmm(4) - 0.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=0.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.0&lt;br /&gt;
stat = (bgmm(4) - 1.0)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.0) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
% Test of gam = 1.5&lt;br /&gt;
stat = (bgmm(4) - 1.5)/se(4);&lt;br /&gt;
disp([&amp;#039;Test of (gam=1.5) = &amp;#039;, num2str(stat) ]);&lt;br /&gt;
disp([&amp;#039;p-value           = &amp;#039;, num2str(2*(1-normcdf(abs(stat)))) ]);&lt;br /&gt;
disp(&amp;#039; &amp;#039;);&lt;br /&gt;
&lt;br /&gt;
%% Inference - Overidentifying restrictions&lt;br /&gt;
&lt;br /&gt;
%% Plot volatility function for alternative values of gam&lt;br /&gt;
tt = seqa(1946+12/12,1/12,t);&lt;br /&gt;
figure(1)&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,1);&lt;br /&gt;
plot(tt,drt./r1t.^0.0);&lt;br /&gt;
title(&amp;#039;$\gamma=0.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,2);&lt;br /&gt;
plot(tt,drt./r1t.^0.5);&lt;br /&gt;
title(&amp;#039;$\gamma=0.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,3);&lt;br /&gt;
plot(tt,drt./r1t.^1.0);&lt;br /&gt;
title(&amp;#039;$\gamma=1.0$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
subplot(2,2,4);&lt;br /&gt;
plot(tt,drt./r1t.^1.5);&lt;br /&gt;
title(&amp;#039;$\gamma=1.5$&amp;#039;)&lt;br /&gt;
box off&lt;br /&gt;
axis tight&lt;br /&gt;
&lt;br /&gt;
%&lt;br /&gt;
%------------------------- Functions -------------------------------------%&lt;br /&gt;
%&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Define the moment equations &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function dt = meqn(b,drt,r1t)&lt;br /&gt;
    &lt;br /&gt;
        ut = drt - b(1) - b(2)*r1t;&lt;br /&gt;
        zt = [ones(size(ut,1),1),r1t];&lt;br /&gt;
        dt = repmat(ut,1,2).*zt;&lt;br /&gt;
        dt = [dt,repmat((ut.^2 - (b(3)^2)*r1t.^(2*b(4)) ),1,2).*zt];&lt;br /&gt;
   &lt;br /&gt;
end&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% Defines the mean of the moment conditions  &lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
function ret = meaneqn(b,drt,r1t)&lt;br /&gt;
&lt;br /&gt;
        ret = (mean(meqn(b,drt,r1t)))&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
%-------------------------------------------------------------------------%&lt;br /&gt;
% GMM objective function with user-defined&lt;br /&gt;
% weighting matrix, w&lt;br /&gt;
%-------------------------------------------------------------------------%   &lt;br /&gt;
function ret = qw(b,drt,r1t,w)&lt;br /&gt;
        &lt;br /&gt;
    t = length(drt);&lt;br /&gt;
    d = meqn(b,drt,r1t);&lt;br /&gt;
    g = mean(d)&amp;#039;;&lt;br /&gt;
&lt;br /&gt;
    ret = g&amp;#039;*inv(w)*g;&lt;br /&gt;
end&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4259</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4259"/>
				<updated>2021-03-20T15:51:18Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handles and&amp;lt;br&amp;gt;Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can (under certain assumptions) be found analytically. We call them the Ordinary Least Squares (OLS) estimates, and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When econometric models do not have such an analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and the implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Method of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| [[GMM|Basic Code]]&amp;lt;br&amp;gt; Video &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in University of Manchester computer labs. If you make regular use of MATLAB, you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a long list of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometrics textbook. There is a wide range of very good Econometrics textbooks available. If you are concerned with programming in MATLAB, then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:US3monthRate.xlsx&amp;diff=4258</id>
		<title>File:US3monthRate.xlsx</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:US3monthRate.xlsx&amp;diff=4258"/>
				<updated>2021-03-20T15:31:59Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4257</id>
		<title>MATLAB</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=MATLAB&amp;diff=4257"/>
				<updated>2021-03-20T15:31:19Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /*  Special Econometric Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;div id=&amp;quot;Essential&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;The Essential MATLAB Programming Techniques ==&lt;br /&gt;
&lt;br /&gt;
In this section we will introduce a number of basic and intermediate programming techniques. Whatever language you program in, you will encounter these techniques, although the details will, of course, vary. We recommend that you make sure you are familiar with these before you progress to [[#SpecEcmtrTopics| Special Econometric Topics ]].&lt;br /&gt;
&lt;br /&gt;
=== Basic Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basics and&amp;lt;br&amp;gt;Matrices&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Program Flow and&amp;lt;br&amp;gt;Logicals&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[Discussion]] &amp;lt;br&amp;gt; [http://www.youtube.com/watch?v=av5MgVpybT0&amp;amp;feature=youtu.be&amp;amp;hd=1 Example Clip]&lt;br /&gt;
| [[LoadingData|Discussion]] &amp;lt;br/&amp;gt;[http://youtu.be/jyb68zGM2ik?hd=1 ExampleClip]&lt;br /&gt;
| [[Program Flow and Logicals|Discussion]]&lt;br /&gt;
| [[Function|Discussion]] &amp;lt;br/&amp;gt; [[FctExampleCode|Example Code]] &amp;lt;br/&amp;gt; [[media:OLSexample.xls|OLSexample.xls]] &amp;lt;br&amp;gt; [http://youtu.be/FPw9DH8pfiU?hd=1 Example Clip]&lt;br /&gt;
| [[SavingData|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
After having gone through these basic techniques you may want to test your newly acquired skills with the following examples.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 1&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Example 2&lt;br /&gt;
|-&lt;br /&gt;
| [[Example 1]]&lt;br /&gt;
| [[Example 2|Example2a]]&amp;lt;br&amp;gt;[[Example 2b|Example2b]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Intermediate Programming ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Statistical&amp;lt;br&amp;gt;Functions&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Arrays and&amp;lt;br&amp;gt;Structures&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Debugging&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Graphing Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Function Handles and&amp;lt;br&amp;gt;Anonymous Functions&lt;br /&gt;
|-&lt;br /&gt;
| [[StatFunct|Discussion]]&lt;br /&gt;
| [[ArrayStructures|Discussion]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Graphing|Discussion]]&lt;br /&gt;
| [[Anonym|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Advanced Programming ===&lt;br /&gt;
&lt;br /&gt;
Sorry, but this cannot be taught! It will come with experience. Find someone who has experience in MATLAB programming and let him or her look over your code.&lt;br /&gt;
&lt;br /&gt;
== Nonlinear Optimisation ==&lt;br /&gt;
&lt;br /&gt;
The optimal parameters of a linear econometric model can (under certain assumptions) be found analytically. We call them the Ordinary Least Squares (OLS) estimates, and they are easily calculated with a closed-form formula (see the [[FctExampleCode#OLSestm|OLSest.m]] function). When econometric models do not have such an analytical solution, an alternative parameter estimation strategy is required. In essence it is a clever &amp;quot;trial and error&amp;quot; strategy. This is often called nonlinear optimisation.&lt;br /&gt;
&lt;br /&gt;
Nonlinear optimisation is a very important, but also a very tricky, area of econometric computing. It certainly helps to understand some of the underlying theory, and we therefore have separate sections below on the theory and the implementation.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Theory&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Implementation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Constrained &amp;lt;br&amp;gt;Optimisation&lt;br /&gt;
|-&lt;br /&gt;
| [[NonlinOptTheory| Discussion]]&lt;br /&gt;
| [[NonlinOptImp| Discussion]]&lt;br /&gt;
| [[ConNonlinOptImp| Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;div id=&amp;quot;SpecEcmtrTopics&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; Special Econometric Topics ==&lt;br /&gt;
&lt;br /&gt;
Topics in this Section assume that you have mastered all the techniques covered in the [[#Essential| Essential Programming Section ]].&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust standard&amp;lt;br&amp;gt;errors&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate&amp;lt;br&amp;gt;Time Series&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Unit Root and&amp;lt;br&amp;gt;Stationarity Testing&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Forecasting&lt;br /&gt;
|-&lt;br /&gt;
| [[RobInf|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeOLShac|Example Code]]&lt;br /&gt;
| [[UniTS|Discussion]]&amp;lt;br&amp;gt;[[media:FXrateUSEU.xls|FXrateUSEU.xls]]&amp;lt;br&amp;gt;[[media:USGDP.xls|USGDP.xls]]&lt;br /&gt;
| coming soon&lt;br /&gt;
| [[Forecasting|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Maximum&amp;lt;br&amp;gt;Likelihood&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Generalized&amp;lt;br&amp;gt;Method of Moments&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental&amp;lt;br&amp;gt;Variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
|-&lt;br /&gt;
| [[MaxLik|Discussion]]&amp;lt;br&amp;gt;[[MaxLikCode|Example Code]]&lt;br /&gt;
| Video &amp;lt;br&amp;gt;[[media:US3monthRate.xlsx|US3monthRate.xlsx]]&amp;lt;br&amp;gt;[[media:gmm_level_RB_2stage.m|gmm_level_RB_2stage.m]]&amp;lt;br&amp;gt;[[media: gradp.m|gradp.m]]&lt;br /&gt;
| [[IV|Discussion]]&amp;lt;br&amp;gt;[[ExampleCodeIV|Example Code]]&lt;br /&gt;
| [[Bayes|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Monte-Carlo/&amp;lt;br&amp;gt;Simulation Techniques&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Binary Response&amp;lt;br&amp;gt;Models&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Handling High&amp;lt;br&amp;gt;Frequency Data&lt;br /&gt;
|-&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
| coming soon&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Other useful MATLAB resources ==&lt;br /&gt;
&lt;br /&gt;
=== The MATLAB Software ===&lt;br /&gt;
&lt;br /&gt;
The software is available in University of Manchester computer labs. If you make regular use of MATLAB, you should consider purchasing your own copy. The Student Version of MATLAB is available, for instance, from [http://www.amazon.co.uk/MATLAB-Simulink-Student-Version-R2014a/dp/0989614026/ref=sr_1_1?s=software&amp;amp;ie=UTF8&amp;amp;qid=1411983990&amp;amp;sr=1-1&amp;amp;keywords=matlab+2014 Amazon] for £66. This is a real bargain, considering that the equivalent non-discounted package would come in at about £4,000.&lt;br /&gt;
&lt;br /&gt;
=== Freely available toolboxes ===&lt;br /&gt;
&lt;br /&gt;
The following toolboxes are freely available and contain extremely useful procedures:&lt;br /&gt;
&lt;br /&gt;
* Spatial Econometrics by James P. LeSage [http://www.spatial-econometrics.com/]. This toolbox contains a wide variety of useful econometrics functions and comes with excellent documentation. In addition to quite general econometric functions you will, as the name suggests, find a long list of functions that are relevant if you are working with spatial data.&lt;br /&gt;
* &amp;lt;div id=&amp;quot;MFEtoolbox&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;Oxford MFE toolbox by Kevin Sheppard [https://bitbucket.org/kevinsheppard/mfe_toolbox]. Use the download link in the box on the right that starts with &amp;quot;Owner: Kevin Sheppard&amp;quot;. This toolbox contains many useful functions for uni- and multivariate volatility models.&lt;br /&gt;
&lt;br /&gt;
You need to copy these toolboxes into your MATLAB toolbox folder and add the respective path to the list of folders MATLAB searches for functions. (In the main menu select FILE and then SET PATH, where you can add the relevant folders.) If you work on a computer for which you have no administrator rights, this strategy may not work. This [http://youtu.be/_32OqcW9WoY?hd=1 Example Clip] demonstrates what to do in that case. It is just a matter of adding one line to your code! Piece of cake.&lt;br /&gt;
&lt;br /&gt;
=== Literature and other learning resources ===&lt;br /&gt;
* [http://www.kevinsheppard.com/wiki/MFE_Toolbox Kevin Sheppard&amp;#039;s MATLAB introduction].&lt;br /&gt;
* Martin V., Hurn S. and Harris D. (2012) Econometric Modelling with Time Series: Specification, Estimation and Testing (Themes in Modern Econometrics).[http://www.amazon.co.uk/Econometric-Modelling-Time-Specification-Econometrics/dp/0521196604/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1345214275&amp;amp;sr=1-1] This book contains an extensive library of relevant MATLAB codes.&lt;br /&gt;
* Higham, D.J. and Higham, N.J. (2005) MATLAB Guide, Society for Industrial and Applied Mathematics [http://www.amazon.co.uk/MATLAB-Guide-Desmond-J-Higham/dp/0898715784/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1347377409&amp;amp;sr=1-1]&lt;br /&gt;
This website does not cover any theoretical ground and is no substitute for an Econometrics textbook. There is a wide range of very good Econometrics textbooks available. If you are concerned with programming in MATLAB, then you are likely to appreciate textbooks that use matrix notation. Here are two very good books that fit that bill:&lt;br /&gt;
* Heij C., de Boer P., Franses P.H., Kloek T. and van Dijk H.K (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York.[http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]&lt;br /&gt;
* Greene W.H. (2012) Econometric Analysis, Pearson, Harlow.[http://www.amazon.co.uk/Econometric-Analysis-William-H-Greene/dp/0273753568/ref=sr_1_1?ie=UTF8&amp;amp;qid=1354473593&amp;amp;sr=8-1]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Confidence_Intervals&amp;diff=4256</id>
		<title>Confidence Intervals</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Confidence_Intervals&amp;diff=4256"/>
				<updated>2020-04-20T15:56:41Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Sampling Variability */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
= Point and Interval Estimation =&lt;br /&gt;
&lt;br /&gt;
In the [[Point_Estimation|section on point estimation]], it was noted that an estimate of a population parameter is a single number. Bearing in mind that, in general, the values of population parameters are unknown, it is easy to fall into the trap of treating the estimated value as if it were actually the “true” population value. After all, the estimate is derived from a single sample out of all the possible samples that might be drawn. Different samples will yield different estimates of the population parameter: this is the idea of &amp;#039;&amp;#039;&amp;#039;sampling variability&amp;#039;&amp;#039;&amp;#039;. One can therefore see the usefulness of obtaining, from a &amp;#039;&amp;#039;&amp;#039;single sample&amp;#039;&amp;#039;&amp;#039;, some idea of the range of values of the estimate that might be obtained in different samples. This is the purpose of an &amp;#039;&amp;#039;interval estimate&amp;#039;&amp;#039;. There is an alternative and more popular name, &amp;#039;&amp;#039;confidence interval&amp;#039;&amp;#039;. Initially this name is not used because it does not permit the distinction between an &amp;#039;&amp;#039;&amp;#039;interval estimator&amp;#039;&amp;#039;&amp;#039; and an &amp;#039;&amp;#039;&amp;#039;interval estimate&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Sampling Variability ==&lt;br /&gt;
&lt;br /&gt;
Consider the simplest case of sampling from a normal distribution, &amp;lt;math&amp;gt;X\sim N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt; with the intention of estimating &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. The obvious estimator is the sample mean, &amp;lt;math&amp;gt;\bar{X},&amp;lt;/math&amp;gt; with sampling distribution &amp;lt;math&amp;gt;\bar{X}\sim N\left( \mu ,\sigma^{2}/n\right)&amp;lt;/math&amp;gt; (see this [[Statistics_SamplingDistributions#The_Sampling_Distribution|section]] for details). Here, &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is unknown: one issue will be whether the population parameter &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is also unknown. From the general principle that population parameters are unknown, the answer should be “yes”, but it will be convenient to (initially) assume, for simplicity, that &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is actually known.&lt;br /&gt;
&lt;br /&gt;
The variance of the distribution of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; measures the dispersion of this distribution, and thus gives some idea of the range of values of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; that might be obtained in drawing different samples of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. The question is, though, what is an extreme or untypical value of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt;? The conventional way to define this is by using a multiple of the &amp;#039;&amp;#039;&amp;#039;standard error &amp;#039;&amp;#039;&amp;#039;of &amp;lt;math&amp;gt;\bar{X},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;, as a measure of sampling variability. Then, extreme values do not belong to the interval&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm k SE\left( \bar{X}\right) ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with the value &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; chosen suitably. Then, the factor &amp;lt;math&amp;gt;\pm k SE \left( \bar{X}\right) &amp;lt;/math&amp;gt; is the measure of sampling variability around &amp;lt;math&amp;gt;\bar{x}&amp;lt;/math&amp;gt;. Clearly, the parameter &amp;lt;math&amp;gt;SE\left( \bar{X}\right) &amp;lt;/math&amp;gt; has to be known in order for the measure of sampling variability to be computed. This is why we make the initial assumption of a known &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;, such that we can easily calculate &amp;lt;math&amp;gt;SE\left( \bar{X}\right) = \sqrt{\sigma^2 / n}&amp;lt;/math&amp;gt;. As you can see, this measure partially reflects the inherent variability in the population, as represented by the population variance &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;. &amp;lt;ref&amp;gt;Usually &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt; is, like &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt;, unknown and has to be estimated by using the sample variance &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; instead. It is via the use of &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; that the measure of sampling variability is calculated from a single sample. This will be discussed below.&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How should &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; be chosen? A conventional value is &amp;lt;math&amp;gt;k=2:&amp;lt;/math&amp;gt; why this might be popular will be seen shortly.&lt;br /&gt;
&lt;br /&gt;
To illustrate the reasoning here, we use an example that reappears in a later Section. Suppose that a random sample of size &amp;lt;math&amp;gt;50&amp;lt;/math&amp;gt; is drawn from the distribution of household incomes, where the latter is supposed to be &amp;lt;math&amp;gt;N\left( \mu ,5\right) &amp;lt;/math&amp;gt;, and that the mean of the sample is &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt;. If we choose &amp;lt;math&amp;gt;k=2&amp;lt;/math&amp;gt;, the measure of sampling variability is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\pm 2SE\left( \bar{X}\right) =\pm \left( 2\right) \sqrt{\frac{5}{50}}=\pm 0.6325,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is, relative to the standard deviation of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sqrt{5}\approx 2.24&amp;lt;/math&amp;gt;, rather small.&lt;br /&gt;
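&lt;br /&gt;
For those following the MATLAB material on this site, this calculation can be reproduced directly. A minimal sketch (the variable names are ours), with the example values hard-coded:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sigma2 = 5; n = 50; xbar = 18;   % known variance, sample size and sample mean from the example&lt;br /&gt;
se = sqrt(sigma2/n);             % standard error of the sample mean, sqrt(sigma^2/n)&lt;br /&gt;
sv = 2*se                        % measure of sampling variability with k = 2, approx. 0.6325&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;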
&lt;br /&gt;
= Interval Estimators =&lt;br /&gt;
&lt;br /&gt;
It is simplest to see how an interval estimator is constructed within the context of estimating the mean &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; of a normal distribution &amp;lt;math&amp;gt;N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt; using the sample mean &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; of a random sample of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. An interval has to have two endpoints, and an estimator is a random variable, so we seek two random variables &amp;lt;math&amp;gt;C_{L}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{U}&amp;lt;/math&amp;gt; such that the closed interval &amp;lt;math&amp;gt;\left[ C_{L},C_{U}\right] &amp;lt;/math&amp;gt; contains the parameter &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; with a pre-specified probability. Rather obviously, this is a &amp;#039;&amp;#039;&amp;#039;random interval&amp;#039;&amp;#039;&amp;#039; because its endpoints are random variables.&lt;br /&gt;
&lt;br /&gt;
This interval estimator &amp;lt;math&amp;gt;\left[ C_{L},C_{U}\right] &amp;lt;/math&amp;gt; is also called a &amp;#039;&amp;#039;&amp;#039;confidence interval&amp;#039;&amp;#039;&amp;#039;, and the pre-specified probability is called the &amp;#039;&amp;#039;&amp;#039;confidence coefficient&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;confidence level&amp;#039;&amp;#039;&amp;#039;. The corresponding interval estimate is then the sample value of this random interval: &amp;lt;math&amp;gt;\left[ c_{L},c_{U}\right] &amp;lt;/math&amp;gt;. The sample values &amp;lt;math&amp;gt;c_{L},c_{U}&amp;lt;/math&amp;gt; are called the &amp;#039;&amp;#039;&amp;#039;lower &amp;#039;&amp;#039;&amp;#039;and &amp;#039;&amp;#039;&amp;#039;upper confidence bounds&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;limits&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Construction of the Interval Estimator ==&lt;br /&gt;
&lt;br /&gt;
From the sampling distribution of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt;, for any given value &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; we can evaluate the probability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -k\leqslant \dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\leqslant k\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -k\leqslant \dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\leqslant k\right) =\Pr \left(Z\leqslant k\right) -\Pr \left(Z\leqslant -k\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for &amp;lt;math&amp;gt;Z\thicksim N\left( 0,1\right) &amp;lt;/math&amp;gt;, just as in this [[Point_Estimation#How_close_is_.5C.28.5Cbar.7BX.7D.5C.29_to_.5C.28.5Cmu.5C.29.3F|Section]]. So, if we choose &amp;lt;math&amp;gt;k=1.96&amp;lt;/math&amp;gt;, and consult the Normal Probability [[media:NormalTable.pdf|table]], we find that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -1.96\leqslant \dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\leqslant 1.96\right) =0.95.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By manipulating the &amp;#039;&amp;#039;&amp;#039;two&amp;#039;&amp;#039;&amp;#039; inequalities inside the brackets, but &amp;#039;&amp;#039;&amp;#039;without&amp;#039;&amp;#039;&amp;#039; changing the truth content of the inequalities, we can rewrite this so that the centre of the inequalities is the unknown parameter &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. The sequence is to&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;multiply across by &amp;lt;math&amp;gt;SE\left( \bar{X}\right) &amp;lt;/math&amp;gt; to give&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left\{ -1.96SE\left( \bar{X}\right) \leqslant \bar{X}-\mu&lt;br /&gt;
\leqslant 1.96SE\left( \bar{X}\right) \right\} =0.95;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;move &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; from the centre:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left\{ -\bar{X}-1.96SE\left( \bar{X}\right) \leqslant -\mu&lt;br /&gt;
\leqslant -\bar{X}+1.96SE\left( \bar{X}\right) \right\} =0.95;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;multiply through by &amp;lt;math&amp;gt;-1:&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left\{ \bar{X}+1.96SE\left( \bar{X}\right) \geqslant \mu&lt;br /&gt;
\geqslant \bar{X}-1.96SE\left( \bar{X}\right) \right\} =0.95;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;tidy up:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left\{ \bar{X}-1.96SE\left( \bar{X}\right) \leqslant \mu&lt;br /&gt;
\leqslant \bar{X}+1.96SE\left( \bar{X}\right) \right\} =0.95.&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice that because the manipulations do not change the truth content of the inequalities, the probability of the &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; event defined by the inequalities is not changed.&lt;br /&gt;
&lt;br /&gt;
If we identify the endpoints &amp;lt;math&amp;gt;C_{L},C_{U}&amp;lt;/math&amp;gt; of the interval estimator with the endpoints of the interval in part (4),&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C_{L}=\bar{X}-1.96SE\left( \bar{X}\right) ,\;\;\;\;\;C_{U}=\bar{X}+1.96SE\left( \bar{X}\right) ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we will have constructed a random interval with the desired properties:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( C_{L}\leqslant \mu \leqslant C_{U}\right) =0.95.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can also see why we previously stated that the sampling variability is often expressed as &amp;lt;math&amp;gt;\pm 2SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;. As 2 is pretty close to the above value of 1.96, this will deliver an approximate 95% confidence interval.&lt;br /&gt;
&lt;br /&gt;
An alternative expression for this confidence interval uses membership of the interval:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( \mu \in \left[ C_{L},C_{U}\right] \right) =0.95.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In both expressions, &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is fixed and unknown. It is the random variables &amp;lt;math&amp;gt;C_{L}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{U}&amp;lt;/math&amp;gt; which supply the chance behaviour, giving the possibility that &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\notin \left[ C_{L},C_{U}\right] &amp;lt;/math&amp;gt; with some non-zero probability. Indeed, &amp;#039;&amp;#039;&amp;#039;by construction, &amp;#039;&amp;#039;&amp;#039;the interval estimator &amp;#039;&amp;#039;&amp;#039;fails &amp;#039;&amp;#039;&amp;#039;to contain (more strictly, &amp;#039;&amp;#039;&amp;#039;cover&amp;#039;&amp;#039;&amp;#039;) the unknown value &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;0.05&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( \mu \notin \left[ C_{L},C_{U}\right] \right) =0.05.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To summarise, in the standard jargon:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;a 95% confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is given by the random interval or interval estimator&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\left[ \bar{X}-1.96SE\left( \bar{X}\right) ,\;\bar{X}+1.96 SE\left( \bar{X}\right) \right]&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== The Interval Estimate ==&lt;br /&gt;
&lt;br /&gt;
This interval is defined by the sample values of &amp;lt;math&amp;gt;C_{L}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{U}:&amp;lt;/math&amp;gt; these are the lower and upper confidence bounds&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{L}=\bar{x}-1.96SE\left( \bar{X}\right) ,\;\;\;c_{U}=\bar{x}+1.96 SE\left( \bar{X}\right) .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The interval estimate&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm 1.96SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
contains the measure of sampling variability discussed in the Sampling Variability Section above. The choice of &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;k=1.96&amp;lt;/math&amp;gt; is now determined by the desired confidence level. Why choose the latter to be &amp;lt;math&amp;gt;0.95&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;95\%?&amp;lt;/math&amp;gt; This is really a matter of convention.&lt;br /&gt;
&lt;br /&gt;
It is a common abuse of language to call the interval&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm 1.96SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;the&amp;#039;&amp;#039;&amp;#039; confidence interval - indeed it is so common that this abuse will be allowed. Strictly, this is an interval estimate, which is now seen to be a combination of a point estimate, &amp;lt;math&amp;gt;\bar{x}&amp;lt;/math&amp;gt;, of &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;, and a measure of sampling variability determined by the constant &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, which sets the confidence coefficient or level, in this case, 0.95.&lt;br /&gt;
&lt;br /&gt;
There is a common misinterpretation of a confidence interval, based on this abuse of language, which says that &amp;#039;&amp;#039;the confidence interval &amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm 1.96SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;contains &amp;#039;&amp;#039;&amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; &amp;#039;&amp;#039;with&amp;#039;&amp;#039; &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;confidence.&amp;#039;&amp;#039; Why is this a misinterpretation? For the following reasons:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is unknown&amp;lt;math&amp;gt;;&amp;lt;/math&amp;gt;&lt;br /&gt;
* so this “confidence interval” may or may not contain &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt;;&lt;br /&gt;
* since &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is unknown, we will &amp;#039;&amp;#039;&amp;#039;never&amp;#039;&amp;#039;&amp;#039; know which is true;&lt;br /&gt;
* the “confidence level” is either &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;, not &amp;lt;math&amp;gt;0.95&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A better interpretation is based on the relative frequency interpretation of probability. If samples are repeatedly drawn from the population, say &amp;lt;math&amp;gt;X\thicksim N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt;, and the interval estimate (“confidence interval”) at the &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence level is calculated for each sample, about &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; of these intervals will contain &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. However, this doesn’t help when only a single sample is drawn. In any case, this interpretation is only a relative frequency restatement of the principle behind the construction of the interval estimator.&lt;br /&gt;
&lt;br /&gt;
Ultimately, we have to abandon interpretations like this and return to the idea of obtaining, from a single sample, a point estimate of a population parameter and a measure of sampling variability. An interval estimate (“confidence interval”) does precisely this, in a specific way.&lt;br /&gt;
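&lt;br /&gt;
The relative frequency property is nevertheless easy to check by simulation. The MATLAB sketch below assumes purely illustrative population values (&amp;lt;math&amp;gt;\mu =18&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma ^{2}=5&amp;lt;/math&amp;gt;, matching the example that follows), draws many sample means directly from their sampling distribution, and reports the proportion of interval estimates that cover &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mu = 18; sigma2 = 5; n = 50;      % hypothetical population values, for illustration only&lt;br /&gt;
R = 10000;                        % number of repeated samples&lt;br /&gt;
se = sqrt(sigma2/n);              % standard error of the sample mean&lt;br /&gt;
xbars = mu + se*randn(R,1);       % R sample means, each drawn from N(mu, sigma^2/n)&lt;br /&gt;
covered = abs(xbars - mu) &amp;lt;= 1.96*se;   % does xbar +/- 1.96*SE cover mu?&lt;br /&gt;
coverage = mean(covered)          % proportion of covering intervals, close to 0.95&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;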
&lt;br /&gt;
== An Example ==&lt;br /&gt;
&lt;br /&gt;
As in the Sampling Variability Section above, suppose that a random sample of size &amp;lt;math&amp;gt;50&amp;lt;/math&amp;gt; is drawn from the distribution of household incomes, where the latter is supposed to be &amp;lt;math&amp;gt;N\left( \mu ,5\right)&amp;lt;/math&amp;gt;. Notice that &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; here is supposed to be &amp;#039;&amp;#039;&amp;#039;known&amp;#039;&amp;#039;&amp;#039; to equal &amp;lt;math&amp;gt;5&amp;lt;/math&amp;gt;. Suppose that the mean of the sample is &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt;. Then, the &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is (allowing the abuse of language)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\bar{x}\pm 1.96SE\left( \bar{X}\right) &amp;amp;=&amp;amp;18\pm 1.96\sqrt{\dfrac{5}{50}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 0.62 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 17.38,18.62\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here the measure of sampling variability around &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\pm 0.62&amp;lt;/math&amp;gt;. One might reasonably conclude that since this measure of sampling variability is small compared to &amp;lt;math&amp;gt;\bar{x}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt; is a relatively precise estimate of &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. To refer to &amp;#039;&amp;#039;precision &amp;#039;&amp;#039;here is fair, since we are utilising the variance of the sampling distribution of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt;.&lt;br /&gt;
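&lt;br /&gt;
As a quick check, the same interval estimate can be computed in MATLAB; a minimal sketch with the example values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xbar = 18; sigma2 = 5; n = 50;           % example values, sigma^2 assumed known&lt;br /&gt;
se = sqrt(sigma2/n);                     % standard error of the sample mean&lt;br /&gt;
ci = [xbar - 1.96*se, xbar + 1.96*se]    % approx. [17.38, 18.62]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;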
&lt;br /&gt;
== Other Confidence Levels ==&lt;br /&gt;
&lt;br /&gt;
Instead of choosing &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; so that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -k\leqslant \dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\leqslant k\right) =0.95,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we choose it to deliver the desired probability, usually expressed as &amp;lt;math&amp;gt;1-\alpha &amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -k\leqslant \dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\leqslant k\right) =1-\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reason for the use of &amp;lt;math&amp;gt;1-\alpha &amp;lt;/math&amp;gt; is explained later. Since&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;Z=\dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\thicksim N\left(0,1\right) ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we can find from the [[media:NormalTable.pdf|tables]] of the standard normal distribution the value (&amp;#039;&amp;#039;percentage point&amp;#039;&amp;#039;) &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( Z&amp;gt;z_{\alpha /2}\right) =\dfrac{\alpha }{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This implies that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -z_{\alpha /2}\leqslant Z\leqslant z_{\alpha /2}\right) =1-\alpha.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is clear from the familiar picture in the Figure below.&lt;br /&gt;
&lt;br /&gt;
[[File:norm01.jpg|frameless|500px]]&lt;br /&gt;
&lt;br /&gt;
To find a confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; with confidence level &amp;lt;math&amp;gt;1-\alpha &amp;lt;/math&amp;gt;, or equivalently, &amp;lt;math&amp;gt;100\left( 1-\alpha \right) \%&amp;lt;/math&amp;gt;, we can follow the derivation of the interval estimator in the Section above, replacing &amp;lt;math&amp;gt;1.96&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; to give&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left\{ \bar{X}-z_{\alpha /2}SE\left( \bar{X}\right) \leqslant \mu \leqslant \bar{X}+z_{\alpha /2}SE\left( \bar{X}\right) \right\} =1-\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, the random variables &amp;lt;math&amp;gt;C_{L}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{U}&amp;lt;/math&amp;gt; defining the interval estimator are&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;C_{L}=\bar{X}-z_{\alpha /2}SE\left( \bar{X}\right) ,\;\;\;C_{U}=\bar{X}+z_{\alpha /2}SE\left( \bar{X}\right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The sample value of this interval estimator (“the” confidence interval) is then&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm z_{\alpha /2}SE\left( \bar{X}\right) .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the example of the previous section, we can calculate, for example, a &amp;lt;math&amp;gt;99\%&amp;lt;/math&amp;gt; confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. Here,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;1-\alpha =0.99,\;\;\;\alpha =0.01,\;\;\;\alpha /2=0.005,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and then from tables,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;z_{\alpha /2}=2.58.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives the lower and upper confidence bounds as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[ c_{L},c_{U}\right] &amp;amp;=&amp;amp;\bar{x}\pm z_{\alpha /2}SE\left( \bar{X}\right) \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 2.58\sqrt{\dfrac{5}{50}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 0.82 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 17.18,18.82\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice that the measure of sampling variability has increased from &amp;lt;math&amp;gt;0.62&amp;lt;/math&amp;gt; for a &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence interval to &amp;lt;math&amp;gt;0.82&amp;lt;/math&amp;gt; for a &amp;lt;math&amp;gt;99\%&amp;lt;/math&amp;gt; confidence interval. This illustrates the general proposition that the confidence interval gets wider as the confidence coefficient is increased. There has been no change in the precision of estimation here.&lt;br /&gt;
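&lt;br /&gt;
If the MATLAB Statistics Toolbox is available, the percentage point &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; can be obtained from the inverse standard normal cdf, &amp;lt;code&amp;gt;norminv&amp;lt;/code&amp;gt;, rather than from tables. A sketch for the &amp;lt;math&amp;gt;99\%&amp;lt;/math&amp;gt; case above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xbar = 18; sigma2 = 5; n = 50; alpha = 0.01;   % example values and desired confidence level&lt;br /&gt;
z = norminv(1 - alpha/2);                      % 2.5758 for alpha = 0.01&lt;br /&gt;
ci = xbar + [-1, 1]*z*sqrt(sigma2/n)           % approx. [17.18, 18.82]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;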
&lt;br /&gt;
Why the use of &amp;lt;math&amp;gt;1-\alpha &amp;lt;/math&amp;gt; in the probability statement underlying the confidence interval? The random variables &amp;lt;math&amp;gt;C_{L}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{U}&amp;lt;/math&amp;gt; are designed here to make&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( \mu \in \left[ C_{L},C_{U}\right] \right) =1-\alpha&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and therefore&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( \mu \notin \left[ C_{L},C_{U}\right] \right) =\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This probability that &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; does &amp;#039;&amp;#039;&amp;#039;not &amp;#039;&amp;#039;&amp;#039;belong to the confidence interval turns out to be very important for the topic of &amp;#039;&amp;#039;hypothesis testing&amp;#039;&amp;#039;, which will be discussed in a later Section. As a result, &amp;lt;math&amp;gt;\alpha &amp;lt;/math&amp;gt; is considered to be “important”, and the confidence coefficient is then stated in terms of &amp;lt;math&amp;gt;\alpha &amp;lt;/math&amp;gt;. Again, this is largely due to convention.&lt;br /&gt;
&lt;br /&gt;
== A small table of percentage points ==&lt;br /&gt;
&lt;br /&gt;
For the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distribution, it is possible in principle to find the appropriate &amp;lt;math&amp;gt;z_{\alpha }&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; from the [[media:NormalTable.pdf|table]] of the standard normal distribution. But, this soon becomes tiresome. The table below gives &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; to four decimal places for a range of common confidence levels. With some care, it can be used for &amp;lt;math&amp;gt;z_{\alpha }&amp;lt;/math&amp;gt; as well - this will be useful for later work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;header&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1-\alpha &amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\alpha &amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\alpha /2&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.80&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.2&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.10&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.2816&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.90&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.1&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.05&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.6449&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.95&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.05&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.025&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.9600&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.98&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.02&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.01&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.3263&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.99&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.01&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.005&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.5758&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
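&lt;br /&gt;
The &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt; column of this table can be reproduced in MATLAB with &amp;lt;code&amp;gt;norminv&amp;lt;/code&amp;gt; (a sketch, assuming the Statistics Toolbox):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alpha = [0.20 0.10 0.05 0.02 0.01];   % the values of alpha in the table&lt;br /&gt;
z = norminv(1 - alpha/2)              % 1.2816  1.6449  1.9600  2.3263  2.5758&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;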
&lt;br /&gt;
The Figure below shows the notation graphically.&lt;br /&gt;
&lt;br /&gt;
[[File:normsinv.jpg|frameless|500px]]&lt;br /&gt;
&lt;br /&gt;
== A small but important point ==&lt;br /&gt;
&lt;br /&gt;
We have assumed that a random sample of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; has been drawn from a normal population, where &amp;lt;math&amp;gt;X\thicksim N\left( \mu ,\sigma ^{2}\right)&amp;lt;/math&amp;gt;, and it is clear that an important role in a confidence interval for &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is played by&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;SE\left( \bar{X}\right) =\sqrt{\dfrac{\sigma ^{2}}{n}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This standard error has to be known in order to calculate the interval estimate. But, it has frequently been emphasised that population parameters are in general unknown. So, &amp;#039;&amp;#039;&amp;#039;assuming&amp;#039;&amp;#039;&amp;#039; that &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;&amp;#039;is known&amp;#039;&amp;#039;&amp;#039; has to be seen as an unrealistic but simplifying assumption. This assumption allows us to see the principles behind the construction of an interval estimator or confidence interval without other complications. We shall now have to investigate the consequences of relaxing this assumption.&lt;br /&gt;
&lt;br /&gt;
=== Additional Resources ===&lt;br /&gt;
&lt;br /&gt;
* Khan Academy on this type of confidence interval [https://www.khanacademy.org/math/probability/statistics-inferential/confidence-intervals/v/confidence-interval-1].&lt;br /&gt;
&lt;br /&gt;
= Unknown &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution =&lt;br /&gt;
&lt;br /&gt;
What happens if the parameter &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown, as is usually the case? Underlying the construction of a &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is the sampling distribution &amp;lt;math&amp;gt;\bar{X} \thicksim N\left( \mu ,\sigma^{2}/n\right) &amp;lt;/math&amp;gt; and a true probability statement,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -1.96\leqslant \dfrac{\bar{X}-\mu }{\sqrt{\dfrac{\sigma ^{2}}{n}}}\leqslant 1.96\right) =0.95.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is still true when &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown, but it is of no help, since we cannot construct the confidence interval (i.e. interval estimate)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm 1.96\sqrt{\dfrac{\sigma ^{2}}{n}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the light of the discussion starting in [[Point_Estimation|Section]] on the role of &amp;#039;&amp;#039;estimation&amp;#039;&amp;#039; in statistics, there is what seems to be an obvious solution. This is to replace the unknown &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; by an estimate, &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt;, derived from the same sample that was used to obtain &amp;lt;math&amp;gt;\bar{x}&amp;lt;/math&amp;gt;. However, one has to be a little careful. First, &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; is the &amp;#039;&amp;#039;&amp;#039;estimate&amp;#039;&amp;#039;&amp;#039; of the (population) variance, and is the sample value of the &amp;#039;&amp;#039;&amp;#039;estimator&amp;#039;&amp;#039;&amp;#039; &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt;. The probability statement above is based on the fact that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;Z=\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{\sigma ^{2}}{n}}}\thicksim N\left(0,1\right) ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and in this one has to replace &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt;, not by &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt;. In effect, we are talking about the estimator and the estimate of &amp;lt;math&amp;gt;SE\left( \bar{X}\right) :&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* the estimator of &amp;lt;math&amp;gt;SE\left( \bar{X}\right) &amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\sqrt{\dfrac{S^{2}}{n}};&amp;lt;/math&amp;gt;&lt;br /&gt;
* the estimate of &amp;lt;math&amp;gt;SE\left( \bar{X}\right) &amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\sqrt{\dfrac{s^{2}}{n}}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Sometimes the estimator of &amp;lt;math&amp;gt;SE\left( \bar{X}\right) &amp;lt;/math&amp;gt; is denoted &amp;lt;math&amp;gt;\widehat{SE}\left( \bar{X}\right) &amp;lt;/math&amp;gt;, with estimate &amp;lt;math&amp;gt;\widehat{se}\left( \bar{X}\right) &amp;lt;/math&amp;gt;, but these are a bit clumsy to use in general.&lt;br /&gt;
&lt;br /&gt;
== Using &amp;lt;math&amp;gt;\widehat{SE}\left( \bar{X}\right)&amp;lt;/math&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
So, instead of using&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;Z=\dfrac{\bar{X}-\mu }{SE\left( \bar{X}\right) }\thicksim N\left(0,1\right) ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we should use&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\widehat{SE}\left( \bar{X}\right) }.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This seems like a simple solution, but unfortunately it is not an innocuous one. This is because the distribution of the random variable &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; combines two sources of randomness, &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt;. As a result, the distribution of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;&amp;#039;NOT&amp;#039;&amp;#039;&amp;#039; &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The distribution of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; was discovered by a statistician called W.S. Gosset, who worked at the Guinness Brewery in Dublin and wrote under the pen name ‘Student’. The distribution of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is called &amp;#039;&amp;#039;Student’s&amp;#039;&amp;#039; &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;distribution&amp;#039;&amp;#039;, or more commonly, just the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;distribution&amp;#039;&amp;#039;. This distribution depends on a parameter, just like other distributions: here, the parameter is called the &amp;#039;&amp;#039;degrees of freedom&amp;#039;&amp;#039;. Before discussing the properties of this distribution, we summarise the distribution statement:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;in random sampling from &amp;lt;math&amp;gt;N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt;,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\widehat{SE}\left( \bar{X}\right) }\thicksim t_{n-1},&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution with &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; degrees of freedom.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The presence of &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; degrees of freedom can be explained in a number of ways. One explanation is based on the expression for the estimator &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S^{2}=\dfrac{1}{n-1}\sum\limits_{i=1}^{n}\left( X_{i}-\bar{X}\right) ^{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, the divisor &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; in this expression leads to the degrees of freedom for &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;. This is actually the main justification for using a divisor &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; in a sample variance &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt; rather than &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, although using &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; also leads to an unbiased estimator.&lt;br /&gt;
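&lt;br /&gt;
Note that this &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; divisor is also the default used by the MATLAB function &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;. A sketch with an arbitrary simulated sample:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
x = randn(10,1);                       % an arbitrary sample of size n = 10&lt;br /&gt;
n = length(x);&lt;br /&gt;
s2 = sum((x - mean(x)).^2)/(n - 1)     % sample variance with divisor n-1&lt;br /&gt;
s2_check = var(x)                      % var normalises by n-1 by default, so this matches&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;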
&lt;br /&gt;
== Properties of the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution ==&lt;br /&gt;
&lt;br /&gt;
In general, the parameter &amp;lt;math&amp;gt;\nu &amp;lt;/math&amp;gt; of the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution is a positive real number, although in most applications, it is an integer, as here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\nu =n-1.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike many (population) parameters, this one has a known value once the sample size is known. The Figure below shows a plot of the &amp;lt;math&amp;gt;N\left(0,1\right) &amp;lt;/math&amp;gt; distribution, a &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution with &amp;lt;math&amp;gt;\nu =2&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution with &amp;lt;math&amp;gt;\nu =5&amp;lt;/math&amp;gt; degrees of freedom.&lt;br /&gt;
&lt;br /&gt;
[[File:tnorm.jpg|frameless|500px]]&lt;br /&gt;
&lt;br /&gt;
It can be seen from the Figure above that the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution is&lt;br /&gt;
&lt;br /&gt;
* symmetric about zero, like the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distribution;&lt;br /&gt;
* more dispersed than &amp;lt;math&amp;gt;N\left( 0,1\right) ;&amp;lt;/math&amp;gt;&lt;br /&gt;
* as &amp;lt;math&amp;gt;\nu &amp;lt;/math&amp;gt; increases, approaches the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distribution.&lt;br /&gt;
&lt;br /&gt;
One can show that if&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T\thicksim t_{\nu },&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;E\left[ T\right] =0,\;\;\;\;\;var\left[ T\right] =\dfrac{\nu }{\nu-2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the variance is only finite for &amp;lt;math&amp;gt;\nu &amp;gt;2&amp;lt;/math&amp;gt;, and then&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;var\left[ T\right] &amp;gt;1,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which explains the extra dispersion of the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution relative to &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt;.&lt;br /&gt;
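&lt;br /&gt;
This is easy to verify numerically: for &amp;lt;math&amp;gt;\nu =5&amp;lt;/math&amp;gt; the variance should be &amp;lt;math&amp;gt;5/3\approx 1.667&amp;lt;/math&amp;gt;. A sketch using the Statistics Toolbox random number generator &amp;lt;code&amp;gt;trnd&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nu = 5;&lt;br /&gt;
v_theory = nu/(nu - 2)           % 1.6667&lt;br /&gt;
v_sim = var(trnd(nu, 1e6, 1))    % sample variance of a large t draw, close to 1.6667&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;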
&lt;br /&gt;
== Comparing the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distributions ==&lt;br /&gt;
&lt;br /&gt;
One way of doing this is to compare some “typical” probabilities. The difficulty with this is that one cannot produce a table of &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution probabilities&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( T\leqslant t\right) \;\;\;\text{for }T\thicksim t_{\nu }&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to compare with those for&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( Z\leqslant z\right) \;\;\;\text{for }Z\thicksim N\left(0,1\right) :&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
there would have to be a table for each value of &amp;lt;math&amp;gt;\nu&amp;lt;/math&amp;gt;. If you need precise probabilities from the t-distribution you need to use software such as EXCEL or MATLAB. The following table gives some numerical values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;header&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr \left( T\leqslant 1.96\right) &amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr \left( T&amp;gt;1.96\right) &amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr&lt;br /&gt;
\left( \left| T\right| &amp;gt;1.96\right) &amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{2}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.9055&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0945&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.1891&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{4}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.9464&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0536&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.1073&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{40}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.9715&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0285&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0570&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{100}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.9736&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0264&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0528&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.9750&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0250&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.0500&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the tail probabilities for the &amp;lt;math&amp;gt;t_{\nu }&amp;lt;/math&amp;gt; distribution approach those of the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distribution as &amp;lt;math&amp;gt;\nu &amp;lt;/math&amp;gt; increases, although the rate of convergence is in fact quite slow. Conventionally, one treats &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; as if it were &amp;lt;math&amp;gt;t_{\infty }&amp;lt;/math&amp;gt;, as in the t-distribution [[media:TTable.pdf|table]] used on this site.&lt;br /&gt;
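&lt;br /&gt;
The entries of this table can be computed with the cdf functions &amp;lt;code&amp;gt;tcdf&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;normcdf&amp;lt;/code&amp;gt; from the Statistics Toolbox; a sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nu = [2 4 40 100];&lt;br /&gt;
p_upper = 1 - tcdf(1.96, nu)    % Pr(T &amp;gt; 1.96): 0.0945  0.0536  0.0285  0.0264&lt;br /&gt;
p_two   = 2*p_upper             % Pr(|T| &amp;gt; 1.96), using symmetry about zero&lt;br /&gt;
p_norm  = 1 - normcdf(1.96)     % 0.0250, the N(0,1) benchmark&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;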
&lt;br /&gt;
An alternative comparison is in terms of &amp;#039;&amp;#039;percentage points&amp;#039;&amp;#039; - values &amp;lt;math&amp;gt;t &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;z&amp;lt;/math&amp;gt; such that, for example,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\Pr \left( T\leqslant t\right) &amp;amp;=&amp;amp;0.975\;\;\;\text{for\ \ \ }T\thicksim t_{\nu }, \\&lt;br /&gt;
\Pr \left( Z\leqslant z\right) &amp;amp;=&amp;amp;0.975\;\;\;\text{for\ \ \ }Z\thicksim N\left( 0,1\right) .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
More generally, &amp;lt;math&amp;gt;z_{\alpha }&amp;lt;/math&amp;gt; is the percentage point such that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( Z&amp;gt;z_{\alpha }\right) =\alpha ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and, for the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution, &amp;lt;math&amp;gt;t_{\nu ,\alpha }&amp;lt;/math&amp;gt; is the percentage point such that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( T&amp;gt;t_{\nu ,\alpha }\right) =\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This [[media:TTable.pdf|table]] gives values of &amp;lt;math&amp;gt;t_{\nu ,\alpha }&amp;lt;/math&amp;gt; for various combinations of &amp;lt;math&amp;gt;\nu &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha &amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( T\leqslant t_{\nu ,\alpha }\right) =1-\alpha&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; is the value stated in the row labelled &amp;amp;quot;1-tailed&amp;amp;quot;. &amp;lt;ref&amp;gt;The meaning of 1-tailed and indeed 2-tailed will become obvious from the sections on hypothesis testing.&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The table below shows some of these values:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;[[media:TTable.pdf|Table]]&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;#039;&amp;#039;Other Texts&amp;#039;&amp;#039;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;code&amp;gt;tinv(0.975,df)&amp;lt;/code&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr \left( T\leqslant t_{\nu ,0.025}\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr \left( T&amp;gt;t_{\nu&lt;br /&gt;
,0.025}\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;\Pr \left( \left\vert T\right\vert &amp;gt;t_{\nu&lt;br /&gt;
,0.025}\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{2}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;4.303&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;4.3&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;4.303&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{5}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.571&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.57&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.571&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{40}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.021&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.02&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2.021&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;even&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;t_{100}&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.984&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.98&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.984&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.96&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.96&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1.96&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see the same sort of effects: the value that puts &amp;lt;math&amp;gt;2.5\%&amp;lt;/math&amp;gt; in the upper tail of a &amp;lt;math&amp;gt;t_{\nu }&amp;lt;/math&amp;gt; distribution approaches that for the &amp;lt;math&amp;gt;N\left(0,1\right) &amp;lt;/math&amp;gt; distribution. There is a conventional textbook presumption that for &amp;lt;math&amp;gt;\nu &amp;lt;/math&amp;gt; sufficiently large, one can use the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; percentage points as good enough practical approximations to those from the &amp;lt;math&amp;gt;t_{\nu }&amp;lt;/math&amp;gt; distribution. The figure &amp;lt;math&amp;gt;\nu =40&amp;lt;/math&amp;gt; is often mentioned for this purpose.&lt;br /&gt;
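&lt;br /&gt;
These percentage points can be reproduced with the Statistics Toolbox inverse cdf &amp;lt;code&amp;gt;tinv&amp;lt;/code&amp;gt;; note that this function takes the one-tailed (lower) probability, so the &amp;lt;math&amp;gt;0.975&amp;lt;/math&amp;gt; quantile is requested to put &amp;lt;math&amp;gt;2.5\%&amp;lt;/math&amp;gt; in the upper tail. A sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nu = [2 5 40 100];&lt;br /&gt;
t_crit = tinv(0.975, nu)    % 4.3027  2.5706  2.0211  1.9840&lt;br /&gt;
z_crit = norminv(0.975)     % 1.9600, the limiting N(0,1) value&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;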
&lt;br /&gt;
== Confidence intervals using the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution ==&lt;br /&gt;
&lt;br /&gt;
Suppose that a &amp;lt;math&amp;gt;100\left( 1-\alpha \right) \%&amp;lt;/math&amp;gt; interval estimator or confidence interval is wanted for the mean of a normal distribution, &amp;lt;math&amp;gt;N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown. A random sample of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; will be drawn from this distribution. In the case where &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt; was known, the facts that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\bar{X} &amp;amp;\thicksim &amp;amp;N\left( \mu ,\dfrac{\sigma ^{2}}{n}\right) , \\&lt;br /&gt;
Z &amp;amp;=&amp;amp;\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{\sigma ^{2}}{n}}}\thicksim N\left(0,1\right)\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
were used to derive the interval estimator from the probability statement&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -z_{\alpha /2}\leqslant Z\leqslant z_{\alpha /2}\right) =1-\alpha,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( Z&amp;gt;z_{\alpha /2}\right) =\alpha /2.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We cannot use this argument when &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown. Instead, &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is replaced by its estimator &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt;, and the random variable&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{S^{2}}{n}}}\thicksim t_{n-1}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
used rather than &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;. The replacement probability statement is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -t_{n-1,\alpha /2}\leqslant T\leqslant t_{n-1,\alpha /2}\right)=1-\alpha ,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
in the form&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -t_{n-1,\alpha /2}\leqslant \dfrac{\bar{X}-\mu }{S/\sqrt{n}}\leqslant t_{n-1,\alpha /2}\right) =1-\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As previously, we can again rearrange this to generate the probability statement which defines the endpoints of the interval estimator as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( \bar{X}-t_{n-1,\alpha /2}\sqrt{\dfrac{S^{2}}{n}}\leqslant \mu \leqslant \bar{X}+t_{n-1,\alpha /2}\sqrt{\dfrac{S^{2}}{n}}\right) =1-\alpha .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, the interval estimator or confidence interval is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[ C_{L},C_{U}\right] &amp;amp;=&amp;amp;\left[ \bar{X}-t_{n-1,\alpha /2}\sqrt{\dfrac{S^{2}}{n}},\bar{X}+t_{n-1,\alpha /2}\sqrt{\dfrac{S^{2}}{n}}\right] \\&lt;br /&gt;
&amp;amp;=&amp;amp;\bar{X}\pm t_{n-1,\alpha /2}\sqrt{\dfrac{S^{2}}{n}}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The sample value of this interval estimator (“the” confidence interval) is then&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm t_{n-1,\alpha /2}\sqrt{\dfrac{s^{2}}{n}}:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
notice the use of the sample value &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt; in this expression. This can be usefully compared with the corresponding confidence interval for the case where &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is known:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm z_{\alpha /2}\sqrt{\dfrac{\sigma ^{2}}{n}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Two things are different in the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; based confidence interval: the use of &amp;lt;math&amp;gt;t_{n-1,\alpha /2}&amp;lt;/math&amp;gt; rather than &amp;lt;math&amp;gt;z_{\alpha /2}&amp;lt;/math&amp;gt;, and the use of &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; rather than &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
This is the same as the previous example, but now assuming that the population variance &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown. Household income in £’000 is &amp;lt;math&amp;gt;X\thicksim N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt;, where &amp;#039;&amp;#039;&amp;#039;both&amp;#039;&amp;#039;&amp;#039; &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; are unknown. A random sample of size &amp;lt;math&amp;gt;n=5&amp;lt;/math&amp;gt; (previously 50) yields &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt; (as before) and &amp;lt;math&amp;gt;s^{2}=4.5&amp;lt;/math&amp;gt;. Here,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{S^{2}}{n}}}\thicksim t_{4}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 95% confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;, we need the percentage point &amp;lt;math&amp;gt;t_{4,0.025}&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( T\leqslant t_{4,0.025}\right) =0.975.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the t-distribution [[media:TTable.pdf|table]] this is found to be&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;t_{4,0.025}=2.776.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; is then&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\bar{x}\pm t_{n-1,\alpha /2}\sqrt{\dfrac{s^{2}}{n}} &amp;amp;=&amp;amp;18\pm \left(2.776\right) \sqrt{\dfrac{4.5}{5}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 2.634 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 15.366,20.634\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
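&lt;br /&gt;
This arithmetic is easily checked in R; the following is a minimal sketch in which the sample values are typed in directly and &amp;lt;source enclose=none&amp;gt;qt&amp;lt;/source&amp;gt; supplies the percentage point otherwise read from the table:&lt;br /&gt;
&lt;br /&gt;
     xbar &amp;lt;- 18; s2 &amp;lt;- 4.5; n &amp;lt;- 5&lt;br /&gt;
     tcrit &amp;lt;- qt(0.975, df = n - 1)          # percentage point, approx. 2.776&lt;br /&gt;
     xbar + c(-1, 1) * tcrit * sqrt(s2/n)    # approx. [15.366, 20.634]&lt;br /&gt;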
&lt;br /&gt;
For comparison with the original example, if we had used &amp;lt;math&amp;gt;\sigma ^{2}=5&amp;lt;/math&amp;gt; with a sample of size &amp;lt;math&amp;gt;5&amp;lt;/math&amp;gt;, the resulting normal-based confidence interval would be&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\bar{x}\pm z_{\alpha /2}\sqrt{\dfrac{\sigma ^{2}}{n}} &amp;amp;=&amp;amp;18\pm \left(1.96\right) \sqrt{\dfrac{5}{5}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 1.96 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 16.04,19.96\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This normal-based confidence interval is narrower than the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; based one: this is the consequence of the extra dispersion of the &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; distribution compared with the &amp;lt;math&amp;gt;N\left( 0,1\right) &amp;lt;/math&amp;gt; distribution. The underlying reason for the increased dispersion is, of course, the fact that we do not know the value of &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt;. Although we have a sample estimate &amp;lt;math&amp;gt;s^2&amp;lt;/math&amp;gt;, we need to acknowledge that there is sampling variation with respect to this estimate on top of the sampling variation we have for &amp;lt;math&amp;gt;\bar{x}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Additional Resources ===&lt;br /&gt;
&lt;br /&gt;
* Salman Khan on such a confidence interval [https://www.khanacademy.org/math/probability/statistics-inferential/confidence-intervals/v/small-sample-size-confidence-intervals].&lt;br /&gt;
&lt;br /&gt;
= Relationships between Normal, &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi ^{2}&amp;lt;/math&amp;gt; distributions =&lt;br /&gt;
&lt;br /&gt;
At this point we offer a digression and report some well-known properties that link the normal, Student &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi ^{2}&amp;lt;/math&amp;gt; distributions. Some of the following results have been discussed above, but all are included for completeness (a small simulation check in R follows the list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let &amp;lt;math&amp;gt;X\sim N(\mu ,\sigma ^{2});&amp;lt;/math&amp;gt; i.e., a normally distributed random variable with mean &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; and variance &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;. Then &amp;lt;math&amp;gt;Z=\left( X-\mu \right) /\sigma \sim N(0,1)&amp;lt;/math&amp;gt;, standard normal, and &amp;lt;math&amp;gt;W=Z^{2}\sim \chi_{1}^{2}&amp;lt;/math&amp;gt;, chi-squared with &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; degree of freedom. Generally, &amp;lt;math&amp;gt;\chi _{v}^{2}&amp;lt;/math&amp;gt; denotes a chi-squared distribution with &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; degrees of freedom.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let &amp;lt;math&amp;gt;X_{i}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i=1,\ldots ,n&amp;lt;/math&amp;gt;, be &amp;#039;&amp;#039;iid &amp;#039;&amp;#039;(independently and identically distributed) &amp;lt;math&amp;gt;N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt; variates, then&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\sum_{i=1}^{n}Z_{i}^{2}\sim \chi _{n}^{2},&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;where &amp;lt;math&amp;gt;Z_{i}=\left( X_{i}-\mu \right) /\sigma &amp;lt;/math&amp;gt;, and&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\frac{1}{\sigma ^{2}}\sum_{i=1}^{n}\left( X_{i}-\bar{X}\right) ^{2}\sim \chi_{n-1}^{2},&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;where &amp;lt;math&amp;gt;\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_{i}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Furthermore,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\sqrt{n}\left( \bar{X}-\mu \right) /\sigma \sim N(0,1)&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;and&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\sqrt{n}\left( \bar{X}-\mu \right) /s\sim t_{n-1},\quad \quad \text{\emph{Student-t distribution }with }n-1\text{ degrees of freedom,}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;where &amp;lt;math&amp;gt;s^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{i}-\bar{X})^{2}&amp;lt;/math&amp;gt; is distributed &amp;#039;&amp;#039;independently &amp;#039;&amp;#039;of &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let &amp;lt;math&amp;gt;Z\sim N(0,1)&amp;lt;/math&amp;gt; independently of &amp;lt;math&amp;gt;Y\sim \chi _{v}^{2}&amp;lt;/math&amp;gt;. Then, &amp;lt;math&amp;gt;S=\frac{Z}{\sqrt{Y/v}}\sim t_{v}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let &amp;lt;math&amp;gt;W\sim \chi _{m}^{2}&amp;lt;/math&amp;gt; independently of &amp;lt;math&amp;gt;V\sim \chi _{p}^{2}&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;U=W+V\sim \chi _{m+p}^{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;R=\frac{W/m}{V/p}\sim F_{m,p};&amp;lt;/math&amp;gt; i.e., &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; has an &amp;#039;&amp;#039;F-distribution&amp;#039;&amp;#039; with &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; degrees of freedom. Hence,&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;R^{-1}\sim F_{p,m};&amp;lt;/math&amp;gt; and,&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;using previous results, if &amp;lt;math&amp;gt;S\sim t_{q}&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;S^{2}\sim F_{1,q}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
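&lt;br /&gt;
As promised, here is a small simulation sketch of result 3: it constructs a Student-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; variate from independent standard normal and chi-squared variates, and compares its sample variance with the theoretical variance of a &amp;lt;math&amp;gt;t_{v}&amp;lt;/math&amp;gt; variate, &amp;lt;math&amp;gt;v/(v-2)&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
     set.seed(1)&lt;br /&gt;
     v &amp;lt;- 5&lt;br /&gt;
     z &amp;lt;- rnorm(100000)             # Z ~ N(0,1)&lt;br /&gt;
     y &amp;lt;- rchisq(100000, df = v)    # Y ~ chi-squared with v df, independent of Z&lt;br /&gt;
     s &amp;lt;- z / sqrt(y / v)           # by result 3, S ~ t_v&lt;br /&gt;
     c(var(s), v / (v - 2))         # sample variance should be close to v/(v-2)&lt;br /&gt;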
&lt;br /&gt;
= Large Sample Confidence Intervals =&lt;br /&gt;
&lt;br /&gt;
The discussion so far has been based on the idea that we are sampling from a normal distribution, &amp;lt;math&amp;gt;N\left( \mu ,\sigma ^{2}\right) &amp;lt;/math&amp;gt;, in which both &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; may have to be estimated. However, in practice there will be only a few occasions on which we can be certain that the distribution is normal, so the assumption of normality need not be true: what can be done if the &amp;#039;&amp;#039;&amp;#039;assumption of normality&amp;#039;&amp;#039;&amp;#039; is false?&lt;br /&gt;
&lt;br /&gt;
== Impact of Central Limit Theorem ==&lt;br /&gt;
&lt;br /&gt;
The usual sampling distribution of the sample mean &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; also assumes sampling from a normal distribution. In an earlier [[Statistics_SamplingDistributions#Sampling_from_Non-Normal_distributions| Section]], the effect of sampling from a non-normal distribution was discussed. Provided that one draws a random sample from a population with mean &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; and variance &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;, the Central Limit Theorem assures us that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\dfrac{\bar{X}-\mu }{\sigma /\sqrt{n}}\thicksim N\left( 0,1\right) \;\;\;\;\;\text{approximately,}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
or, equivalently,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{X}\thicksim N\left( \mu ,\dfrac{\sigma ^{2}}{n}\right) \;\;\;\;\;\text{approximately.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is the presumption that the quality of the approximation improves as &amp;lt;math&amp;gt;n\rightarrow \infty &amp;lt;/math&amp;gt;, that is, as the sample size increases. The larger the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, the better.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is known, then we are &amp;#039;&amp;#039;&amp;#039;approximately&amp;#039;&amp;#039;&amp;#039; back in the context of Section [ci]. That is, we proceed as if &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is known, and simply qualify the confidence level as an &amp;#039;&amp;#039;&amp;#039;approximate&amp;#039;&amp;#039;&amp;#039; confidence level.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; is unknown, we can use &amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt; to estimate &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;. However, unless we sample from a normal distribution, it will &amp;#039;&amp;#039;&amp;#039;not&amp;#039;&amp;#039;&amp;#039; be true that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{S^{2}}{n}}}\thicksim t_{n-1}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Rather,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;T=\dfrac{\bar{X}-\mu }{\sqrt{\dfrac{S^{2}}{n}}}\thicksim N\left( 0,1\right) \;\;\;\;\;\text{\textbf{approximately}.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, replacing &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; by an estimator still allows the &amp;#039;&amp;#039;&amp;#039;large sample normal approximation&amp;#039;&amp;#039;&amp;#039; to hold. Only an intuitive justification for this can be given here. This is simply that as &amp;lt;math&amp;gt;n\rightarrow \infty &amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S^{2}\rightarrow \sigma ^{2}:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;S^{2}&amp;lt;/math&amp;gt; gets so close to &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; that its influence on the distribution of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; disappears.&lt;br /&gt;
&lt;br /&gt;
== The large sample confidence interval for &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
From the initial probability statement&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr \left( -z_{\alpha /2}\leqslant \dfrac{\bar{X}-\mu }{\sqrt{\dfrac{S^{2}}{n}}}\leqslant z_{\alpha /2}\right) =1-\alpha \;\;\;\;\;\text{approximately,}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
we derive, as previously, the interval estimator or confidence interval&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[ C_{L},C_{U}\right] =\bar{X}\pm z_{\alpha /2}\sqrt{\dfrac{S^{2}}{n}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
having &amp;#039;&amp;#039;&amp;#039;approximate&amp;#039;&amp;#039;&amp;#039; confidence level &amp;lt;math&amp;gt;100\left( 1-\alpha \right) \%&amp;lt;/math&amp;gt;. The sample value of this confidence interval is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\bar{x}\pm z_{\alpha /2}\sqrt{\dfrac{s^{2}}{n}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Consider the earlier example (using &amp;lt;math&amp;gt;n=50&amp;lt;/math&amp;gt;), but now not assuming that sampling takes place from a normal distribution. We assume that the sample information is &amp;lt;math&amp;gt;n=50&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\bar{x}=18&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;s^{2}=4.5&amp;lt;/math&amp;gt;. Then, an approximate &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence interval is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\bar{x}\pm z_{\alpha /2}\sqrt{\dfrac{s^{2}}{n}} &amp;amp;=&amp;amp;18\pm 1.96\sqrt{\dfrac{4.5}{50}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;18\pm 0.588 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 17.412,18.588\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
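&lt;br /&gt;
Again, this calculation is easily reproduced in R (a sketch with the sample values typed in directly; &amp;lt;source enclose=none&amp;gt;qnorm&amp;lt;/source&amp;gt; replaces the normal table):&lt;br /&gt;
&lt;br /&gt;
     xbar &amp;lt;- 18; s2 &amp;lt;- 4.5; n &amp;lt;- 50&lt;br /&gt;
     xbar + c(-1, 1) * qnorm(0.975) * sqrt(s2/n)   # approx. [17.412, 18.588]&lt;br /&gt;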
&lt;br /&gt;
In general, how this compares with the exact confidence interval based on knowledge of &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt; depends on how good &amp;lt;math&amp;gt;s^{2}&amp;lt;/math&amp;gt; is as an estimate of &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;; usually nothing can be said about this.&lt;br /&gt;
&lt;br /&gt;
== Confidence intervals for population proportions ==&lt;br /&gt;
&lt;br /&gt;
In the Section on [[Point_Estimation#Estimating_the_population_proportion|estimating a population proportion]] &amp;lt;math&amp;gt;\pi &amp;lt;/math&amp;gt; we assumed that a random sample is obtained from the distribution of a Bernoulli random variable, a random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; taking on values &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;1-\pi &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\pi &amp;lt;/math&amp;gt; respectively. The sample mean &amp;lt;math&amp;gt;\bar{X}&amp;lt;/math&amp;gt; here is the random variable representing the sample proportion of &amp;lt;math&amp;gt;1^{\prime }s&amp;lt;/math&amp;gt;, and so is usually denoted &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;, “the” sample proportion. It was shown in that [[Point_Estimation#Estimating_the_population_proportion|Section]] that the sampling distribution of &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is related to a Binomial distribution, and that the Central Limit Theorem can be used to provide an approximate normal sampling distribution.&lt;br /&gt;
&lt;br /&gt;
Since&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;E\left[ P\right] =\pi ,\;\;\;\;var\left[ P\right] =\dfrac{\pi \left( 1-\pi \right) }{n},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\dfrac{P-\pi }{\sqrt{var\left[ P\right] }}\thicksim N\left(0,1\right) \;\;\;\;\;\text{approximately.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, one could hope to use this to provide an approximate confidence interval for &amp;lt;math&amp;gt;\pi &amp;lt;/math&amp;gt;. There is a minor complication here in that &amp;lt;math&amp;gt;var\left[ P\right] &amp;lt;/math&amp;gt; depends on the unknown parameter &amp;lt;math&amp;gt;\pi &amp;lt;/math&amp;gt;, but there is an obvious estimator (&amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;) available which can be used to provide an estimator of &amp;lt;math&amp;gt;var\left[ P\right] &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Following the previous reasoning, we argue that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\dfrac{P-\pi }{\sqrt{\dfrac{P\left( 1-P\right) }{n}}}\thicksim N\left(0,1\right) \;\;\;\;\;\text{approximately.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By analogy with the case of the population mean, an approximate &amp;lt;math&amp;gt;100\left(1-\alpha \right) \%&amp;lt;/math&amp;gt; confidence interval for &amp;lt;math&amp;gt;\pi &amp;lt;/math&amp;gt; is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[ C_{L},C_{U}\right] =P\pm z_{\alpha /2}\sqrt{\dfrac{P\left( 1-P\right)}{n}},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with sample value&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p\pm z_{\alpha /2}\sqrt{\dfrac{p\left( 1-p\right) }{n}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
A random sample of 300 households is obtained, with 28% of the sample owning a tablet computer. An approximate &amp;lt;math&amp;gt;95\%&amp;lt;/math&amp;gt; confidence interval for the population proportion of households owning a tablet computer is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
p\pm z_{\alpha /2}\sqrt{\dfrac{p\left( 1-p\right) }{n}} &amp;amp;=&amp;amp;0.28\pm \left(1.96\right) \sqrt{\dfrac{0.28\left( 1-0.28\right) }{300}} \\&lt;br /&gt;
&amp;amp;=&amp;amp;0.28\pm 0.0508 \\&lt;br /&gt;
&amp;amp;=&amp;amp;\left[ 0.229,0.331\right] .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
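&lt;br /&gt;
Once more, a one-line R check of this interval (a sketch; the sample proportion and sample size are typed in directly):&lt;br /&gt;
&lt;br /&gt;
     p &amp;lt;- 0.28; n &amp;lt;- 300&lt;br /&gt;
     p + c(-1, 1) * qnorm(0.975) * sqrt(p * (1 - p) / n)   # approx. [0.229, 0.331]&lt;br /&gt;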
&lt;br /&gt;
For such an apparently large sample size, this is quite a wide confidence interval. Better precision of estimation would require a larger sample size.&lt;br /&gt;
&lt;br /&gt;
=== Additional resources ===&lt;br /&gt;
&lt;br /&gt;
* This is the Khan Academy example of a confidence interval for a proportion [https://www.khanacademy.org/math/probability/statistics-inferential/confidence-intervals/v/confidence-interval-example].&lt;br /&gt;
&lt;br /&gt;
= Footnotes =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Point_Estimation_Exercises&amp;diff=4255</id>
		<title>Point Estimation Exercises</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Point_Estimation_Exercises&amp;diff=4255"/>
				<updated>2019-09-16T13:35:41Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Exercises */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
= Exercises =&lt;br /&gt;
&lt;br /&gt;
Worked solutions to these questions can be found here: [http://youtu.be/TCHG3mP3q1g?hd=1 Q1], [http://youtu.be/iXwkvtZpjm8?hd=1 Q2], [http://youtu.be/mQymVsrxmPU?hd=1 Q3], [http://youtu.be/x4lzkCx3bNw?hd=1 Q4], [http://youtu.be/PqZko0Yi8Fc?hd=1 Q5] and [http://youtu.be/8-uQdtCZdlA?hd=1 Q6]. A short R snippet for checking some of the numerical answers is given after the exercises.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; Suppose that &amp;lt;math&amp;gt;Y\sim N\left( 6,2\right) &amp;lt;/math&amp;gt;, and that &amp;lt;math&amp;gt;\bar{Y}&amp;lt;/math&amp;gt; is the sample mean of a (simple) random sample of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. Find:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( Y&amp;gt;8\right)&amp;lt;/math&amp;gt;; {0.0793}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( \bar{Y}&amp;gt;8\right) \;&amp;lt;/math&amp;gt;when &amp;lt;math&amp;gt;n=1;&amp;lt;/math&amp;gt; {0.0793}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( \bar{Y}&amp;gt;8\right) \;&amp;lt;/math&amp;gt;when &amp;lt;math&amp;gt;n=2;&amp;lt;/math&amp;gt; {0.0228}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( \bar{Y}&amp;gt;8\right) \;&amp;lt;/math&amp;gt;when &amp;lt;math&amp;gt;n=5;&amp;lt;/math&amp;gt; {0.0000}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Sketch, on the same axes, the sampling distribution of &amp;lt;math&amp;gt;\bar{Y}&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;n=1,2,5&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; In a certain population, 60% of all adults own a car. If a simple random sample of 100 adults is taken, what is the probability that at least 70% of the sample will be car owners? (Optional: use EXCEL to find the exact probability.) {0.0207 and 0.0262 are both approximations}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; When set correctly, a machine produces hamburgers of mean weight &amp;lt;math&amp;gt;100g&amp;lt;/math&amp;gt; and standard deviation &amp;lt;math&amp;gt;5g&amp;lt;/math&amp;gt;. The weight of hamburgers is known to be normally distributed. The hamburgers are sold in packets of four.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;What is the sampling distribution of the total weight of hamburgers in a packet? In stating this sampling distribution, state carefully what results you are using and any assumptions you have to make. {N(400,400), independence}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;A customer claims that packets of hamburgers are underweight. A trading standards officer is sent to investigate. He selects one packet of four hamburgers and finds that the weight of hamburgers in it is &amp;lt;math&amp;gt;390g&amp;lt;/math&amp;gt;. What is the probability of a packet weighing as little as &amp;lt;math&amp;gt;390g&amp;lt;/math&amp;gt; if the machine is set correctly? Do you consider that this finding constitutes evidence that the machine has been set to deliver underweight hamburgers? {0.3085}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;A discrete random variable, &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt;, has the following probability distribution:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;header&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;2&amp;lt;/math&amp;gt;&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr class=&amp;quot;odd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;p\left( y\right) &amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.3&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.4&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td align=&amp;quot;center&amp;quot;&amp;gt;&amp;lt;math&amp;gt;0.3&amp;lt;/math&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L2]&amp;lt;/math&amp;gt; What are &amp;lt;math&amp;gt;E\left[ Y\right] &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y_{\min }&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;y_{\min }&amp;lt;/math&amp;gt; is the smallest possible value of &amp;lt;math&amp;gt;Y?&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; Simple random samples of two observations are to be drawn with replacement from this population. Write down all possible samples, and the probability of each sample. {e.g. &amp;lt;math&amp;gt;P(y_1=0, y_2=2)=0.09&amp;lt;/math&amp;gt;} Use this to obtain the sampling distribution of each of the following statistics:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;the sample mean, &amp;lt;math&amp;gt;\bar{Y};&amp;lt;/math&amp;gt; {e.g. &amp;lt;math&amp;gt;P(\bar{Y}=0.5)=0.24&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;the minimum of the two observations, &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt;. {&amp;lt;math&amp;gt;P(M=1)=0.4&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L2]&amp;lt;/math&amp;gt; Calculate &amp;lt;math&amp;gt;E\left[ \bar{Y}\right] &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;E\left[ M\right] &amp;lt;/math&amp;gt;. State whether each is an unbiased estimator of the corresponding population parameter. {&amp;lt;math&amp;gt;\bar{Y}&amp;lt;/math&amp;gt; yes, &amp;lt;math&amp;gt;M&amp;lt;/math&amp;gt; no}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;A random sample of size three is drawn from the distribution of a Bernoulli random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;, where&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( X=0\right) =0.3,\;\;\;\Pr \left( X=1\right) =0.7.&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; Enumerate all the possible samples, and find their probabilities of being drawn. You should have eight possible samples. {e.g. &amp;lt;math&amp;gt;P(1,0,1)=0.3\times 0.7^{2}&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; Find the sampling distribution of the random variable &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;, the total number of ones in each sample. {e.g. &amp;lt;math&amp;gt;P(T=1)=0.189&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L2]&amp;lt;/math&amp;gt; Check that the probability distribution of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; is the Binomial distribution for &amp;lt;math&amp;gt;n=3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\pi =0.7&amp;lt;/math&amp;gt;, by calculating&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\Pr \left( T=t\right) =\binom{3}{t}\left( 0.7\right) ^{t}\left( 0.3\right)^{3-t}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;for &amp;lt;math&amp;gt;t=0,1,2,3&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L1,L2]&amp;lt;/math&amp;gt; Find the probability distribution of &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;, the sample proportion of ones. How is this probability distribution related to that of &amp;lt;math&amp;gt;T?&amp;lt;/math&amp;gt; {e.g. &amp;lt;math&amp;gt;\Pr(P=2/3)=0.441&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L2]&amp;lt;/math&amp;gt; Is &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; an unbiased estimator of &amp;lt;math&amp;gt;\Pr \left( X=1\right) ?&amp;lt;/math&amp;gt; {yes}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;[L2]&amp;lt;/math&amp;gt; A simple random sample of three observations is taken from a population with mean &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt; and variance &amp;lt;math&amp;gt;\sigma ^{2}&amp;lt;/math&amp;gt;. The three sample random variables are denoted &amp;lt;math&amp;gt;Y_{1},Y_{2},Y_{3}&amp;lt;/math&amp;gt;. A sample statistic is being sought to estimate &amp;lt;math&amp;gt;\mu &amp;lt;/math&amp;gt;. The statistics being considered are&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;A_{1}=\dfrac{1}{3}\left( Y_{1}+Y_{2}+Y_{3}\right) ;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;A_{2}=\dfrac{1}{2}\left( Y_{1}+Y_{2}\right) ;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;A_{3}=\dfrac{1}{2}\left( Y_{1}+Y_{2}+Y_{3}\right) ;&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;A_{4}=0.75Y_{1}+0.75Y_{2}-0.5Y_{3}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Which of these statistics yields an unbiased estimator of &amp;lt;math&amp;gt;\mu ?&amp;lt;/math&amp;gt; {&amp;lt;math&amp;gt;A_1,A_2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_4&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Of those that are unbiased, which is the most efficient? {&amp;lt;math&amp;gt;Var(A_1)=Var(Y)/3&amp;lt;/math&amp;gt;}&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Of those that are unbiased, find the efficiency with respect to &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
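&lt;br /&gt;
As mentioned above, some of the numerical answers can be checked directly in R. For instance, for Question 1, recalling that &amp;lt;math&amp;gt;Y\sim N\left( 6,2\right)&amp;lt;/math&amp;gt; (so the standard deviation is &amp;lt;math&amp;gt;\sqrt{2}&amp;lt;/math&amp;gt;) and that &amp;lt;math&amp;gt;\bar{Y}&amp;lt;/math&amp;gt; has variance &amp;lt;math&amp;gt;2/n&amp;lt;/math&amp;gt;, a sketch is:&lt;br /&gt;
&lt;br /&gt;
     1 - pnorm(8, mean = 6, sd = sqrt(2))     # parts 1 and 2: approx. 0.079&lt;br /&gt;
     1 - pnorm(8, mean = 6, sd = sqrt(2/2))   # part 3 (n = 2): approx. 0.0228&lt;br /&gt;
     1 - pnorm(8, mean = 6, sd = sqrt(2/5))   # part 4 (n = 5): approx. 0.0008&lt;br /&gt;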
&lt;br /&gt;
= Footnotes =&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Panel_in_R&amp;diff=4254</id>
		<title>Panel in R</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Panel_in_R&amp;diff=4254"/>
				<updated>2019-03-03T22:13:33Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Set-up of Panel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In this section we shall discuss how to deal with panel data and how to use econometric techniques that exploit the panel character of the data and the additional analysis it makes possible.&lt;br /&gt;
&lt;br /&gt;
A lot of this material repeats material that is discussed in this YouTube clip [https://www.youtube.com/watch?v=1pST2lUx6QM]. &lt;br /&gt;
&lt;br /&gt;
== The plm package ==&lt;br /&gt;
&lt;br /&gt;
To deal efficiently with panel data we will need the &amp;lt;source enclose=none&amp;gt;plm&amp;lt;/source&amp;gt; package; you need to download it (&amp;lt;source enclose=none&amp;gt;install.packages(&amp;quot;plm&amp;quot;)&amp;lt;/source&amp;gt;) and load it into the workspace (&amp;lt;source enclose=none&amp;gt;library(plm)&amp;lt;/source&amp;gt;) in the usual manner. &lt;br /&gt;
&lt;br /&gt;
Details for this package can be found [http://cran.r-project.org/web/packages/plm/vignettes/plm.pdf here].&lt;br /&gt;
&lt;br /&gt;
== Example Data ==&lt;br /&gt;
&lt;br /&gt;
Here we are using the [[R#Data_Sets|Crime Statistics]] dataset for illustration. It is used in Example 13.9 in Wooldridge&amp;#039;s Introductory Econometrics. The dependent variable we will be looking at here is the crime rate (&amp;lt;source enclose = none&amp;gt;crmrte&amp;lt;/source&amp;gt;) and we will use a range of explanatory variables that describe features of the local enforcement setting, like the probability of arrest (&amp;lt;source enclose = none&amp;gt;prbarr&amp;lt;/source&amp;gt;), probability of conviction (&amp;lt;source enclose = none&amp;gt;prbconv&amp;lt;/source&amp;gt;), probability of prison if convicted (&amp;lt;source enclose = none&amp;gt;prbpris&amp;lt;/source&amp;gt;), average sentence length (&amp;lt;source enclose = none&amp;gt;avgsen&amp;lt;/source&amp;gt;) and the number of police officers per person (&amp;lt;source enclose = none&amp;gt;polpc&amp;lt;/source&amp;gt;). The data are for 90 counties in North Carolina and for each we have observations for the years 1981 to 1987.&lt;br /&gt;
&lt;br /&gt;
These types of models are usually estimated in log-log form to obtain elasticities, and for this reason the dataset already includes the logged variables (the above names preceded with the letter &amp;lt;source enclose = none&amp;gt;&amp;quot;l&amp;quot;&amp;lt;/source&amp;gt;, e.g. &amp;lt;source enclose = none&amp;gt;lcrmrte&amp;lt;/source&amp;gt;). Further, in anticipation of time-differenced series often being used, the dataset also includes the differenced log variables. These are the variables preceded with the letter &amp;lt;source enclose = none&amp;gt;&amp;quot;c&amp;quot;&amp;lt;/source&amp;gt;, e.g. &amp;lt;source enclose = none&amp;gt;clcrmrte&amp;lt;/source&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Original data files often come with only the original variables, without the logged and differenced variables; as it turns out, you will often not need these extra variables anyway.&lt;br /&gt;
&lt;br /&gt;
When using panel data we will often want to allow for time period effects and for this reason the dataset also includes time dummy variables, e.g. &amp;lt;source enclose = none&amp;gt; d83&amp;lt;/source&amp;gt;, which takes a value of 1 for observations from 1983 and 0 otherwise.&lt;br /&gt;
&lt;br /&gt;
== Set-up of Panel ==&lt;br /&gt;
&lt;br /&gt;
Here is our initial data load-up&lt;br /&gt;
&lt;br /&gt;
     library(plm)&lt;br /&gt;
     setwd(&amp;quot;X:/ECLR/R/PanelData&amp;quot;)              # This sets the working directory&lt;br /&gt;
     # Opens crime4.csv from working directory&lt;br /&gt;
     # converts variables with &amp;quot;.&amp;quot; entries to num with NA instead of &amp;quot;.&amp;quot;&lt;br /&gt;
     mydata &amp;lt;- read.csv(&amp;quot;crime4.csv&amp;quot;,na.strings = &amp;quot;.&amp;quot;) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So far we have merely uploaded the csv file. It is worth having a look at the data at this stage&lt;br /&gt;
&lt;br /&gt;
[[File:Panel_bic1.jpg|frameless|500px]]&lt;br /&gt;
&lt;br /&gt;
The distinctive Panel feature of this dataset is that we have several periods of observations (year) for each county. At this stage R does not yet know that these are Panel Data and now we need to let it know about this feature. This is what the following function does:&lt;br /&gt;
&lt;br /&gt;
     pdata &amp;lt;- pdata.frame(mydata, index = c(&amp;quot;county&amp;quot;,&amp;quot;year&amp;quot;)) # defines the panel dimensions&lt;br /&gt;
&lt;br /&gt;
The first input into the &amp;lt;source enclose=none&amp;gt;pdata.frame&amp;lt;/source&amp;gt; function is the original data frame (here &amp;lt;source enclose = none&amp;gt;mydata&amp;lt;/source&amp;gt;). The second input specifies which variable indexes the individual (here &amp;lt;source enclose = none&amp;gt;&amp;quot;county&amp;quot;&amp;lt;/source&amp;gt;.) and the variable which indexes the time (here &amp;lt;source enclose = none&amp;gt;&amp;quot;year&amp;quot;&amp;lt;/source&amp;gt;.). Both these are collected in a list and handed over to the function as &amp;lt;source enclose = none&amp;gt;index = c(&amp;quot;county&amp;quot;,&amp;quot;year&amp;quot;)&amp;lt;/source&amp;gt;.&lt;br /&gt;
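&lt;br /&gt;
To check that the panel structure has been recognised as intended, the &amp;lt;source enclose = none&amp;gt;pdim&amp;lt;/source&amp;gt; function of the plm package reports the panel dimensions; for this dataset it should report a balanced panel with n = 90 counties and T = 7 years:&lt;br /&gt;
&lt;br /&gt;
     pdim(pdata)    # reports whether the panel is balanced, plus n, T and N&lt;br /&gt;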
&lt;br /&gt;
== Estimation Methods ==&lt;br /&gt;
&lt;br /&gt;
This is not the right place to discuss the merits of the range of different estimation methods that exist for Panel Data sets. Let&amp;#039;s assume we want to explain variation in the dependent variable &amp;lt;source enclose = none&amp;gt;lcrmrte&amp;lt;/source&amp;gt; as a function of logs of all the other variables listed above (&amp;lt;source enclose = none&amp;gt;lprbarr, lprbconv, lprbpris, lavgsen and lpolpc&amp;lt;/source&amp;gt;) and time dummy variables.&lt;br /&gt;
&lt;br /&gt;
A range of estimation methods exist to make use of the panel character of data. Here I will only introduce pooled estimation and first difference estimation. If you want to use any of the other available methods you should consult the [http://cran.r-project.org/web/packages/plm/vignettes/plm.pdf documentation] of the plm package.&lt;br /&gt;
&lt;br /&gt;
=== Pooled OLS ===&lt;br /&gt;
&lt;br /&gt;
This is the most straightforward way to estimate a model. Essentially we are just chucking all the observations into one big pot and applying a straightforward OLS estimation. The way to do this is as follows:&lt;br /&gt;
&lt;br /&gt;
     pooling &amp;lt;- plm(formula = lcrmrte ~ d82 + d83 + d84 + d85 + d86 + d87 &lt;br /&gt;
               + prbarr + prbconv + prbpris + avgsen + polpc, &lt;br /&gt;
               data = pdata, model = &amp;quot;pooling&amp;quot;)&lt;br /&gt;
     print(summary(pooling))&lt;br /&gt;
&lt;br /&gt;
As you can see, calling a panel data estimation method using the &amp;lt;source enclose = none&amp;gt;plm&amp;lt;/source&amp;gt; function is not unlike calling a normal OLS regression using the &amp;lt;source enclose = none&amp;gt;lm&amp;lt;/source&amp;gt; function. The first input is the model representation (the dependent variable followed by all explanatory variables) and the second is the dataframe which is being used, and importantly here we are using the panel data version we defined previously &amp;lt;source enclose = none&amp;gt;pdata&amp;lt;/source&amp;gt;. A difference is that here we need a third input which specifies how we estimate the Panel Data model. If we want to pool all observation then we call &amp;lt;source enclose = none&amp;gt;model = &amp;quot;pooling&amp;quot;&amp;lt;/source&amp;gt;. &amp;lt;ref&amp;gt;Other available methods are &amp;quot;within&amp;quot;, &amp;quot;between&amp;quot;, &amp;quot;random&amp;quot;, &amp;quot;fd&amp;quot; and &amp;quot;ht&amp;quot;. For details see the plm documentation.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This delivers&lt;br /&gt;
&lt;br /&gt;
     Oneway (individual) effect Pooling Model&lt;br /&gt;
     &lt;br /&gt;
     Call:&lt;br /&gt;
     plm(formula = lcrmrte ~ d82 + d83 + d84 + d85 + d86 + d87 + prbarr + &lt;br /&gt;
         prbconv + prbpris + avgsen + polpc, data = pdata, model = &amp;quot;pooling&amp;quot;)&lt;br /&gt;
     &lt;br /&gt;
     Balanced Panel: n=90, T=7, N=630&lt;br /&gt;
     &lt;br /&gt;
     Residuals :&lt;br /&gt;
        Min. 1st Qu.  Median 3rd Qu.    Max. &lt;br /&gt;
     -2.0000 -0.2840  0.0328  0.3110  1.4800 &lt;br /&gt;
     &lt;br /&gt;
     Coefficients :&lt;br /&gt;
                   Estimate Std. Error  t-value  Pr(&amp;gt;|t|)    &lt;br /&gt;
     (Intercept) -3.3529108  0.1385095 -24.2071 &amp;lt; 2.2e-16 ***&lt;br /&gt;
     d82         -0.0118914  0.0723975  -0.1643 0.8695870    &lt;br /&gt;
     d83         -0.0465997  0.0720681  -0.6466 0.5181272    &lt;br /&gt;
     d84         -0.1524940  0.0725012  -2.1033 0.0358405 *  &lt;br /&gt;
     d85         -0.1180270  0.0728541  -1.6200 0.1057325    &lt;br /&gt;
     d86         -0.0838222  0.0723015  -1.1593 0.2467644    &lt;br /&gt;
     d87          0.0027029  0.0709855   0.0381 0.9696382    &lt;br /&gt;
     prbarr      -1.7873016  0.1163001 -15.3680 &amp;lt; 2.2e-16 ***&lt;br /&gt;
     prbconv     -0.0958937  0.0126226  -7.5970 1.127e-13 ***&lt;br /&gt;
     prbpris      0.8453609  0.2193194   3.8545 0.0001281 ***&lt;br /&gt;
     avgsen      -0.0057214  0.0074539  -0.7676 0.4430340    &lt;br /&gt;
     polpc       56.9623752  8.1233886   7.0121 6.173e-12 ***&lt;br /&gt;
     ---&lt;br /&gt;
     Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1&lt;br /&gt;
     &lt;br /&gt;
     Total Sum of Squares:    206.38&lt;br /&gt;
     Residual Sum of Squares: 137.88&lt;br /&gt;
     R-Squared      :  0.33191 &lt;br /&gt;
     Adj. R-Squared :  0.32559 &lt;br /&gt;
     F-statistic: 27.9113 on 11 and 618 DF, p-value: &amp;lt; 2.22e-16&lt;br /&gt;
&lt;br /&gt;
Here we have included 6 time dummies to allow for different intercepts for the seven different years.&lt;br /&gt;
     &lt;br /&gt;
=== First Difference ===&lt;br /&gt;
&lt;br /&gt;
One issue with simple models like the pooled model is that there is quite likely to be unobserved heterogeneity. While not explicitly modelled, and hence contained in the error term, some of this county-to-county variation is to be expected to be correlated with some of the explanatory variables, consequently violating the zero-conditional-mean assumption.&lt;br /&gt;
&lt;br /&gt;
Perhaps the easiest way to deal with this is to estimate the model in (time-)differenced form, as this differencing will eliminate the (time-invariant) elements of this heterogeneity. Estimating the model in differenced form is done as follows:&lt;br /&gt;
&lt;br /&gt;
     fd3 &amp;lt;- plm(lcrmrte ~ d82 + d83 + d84 + d85 + d86 + d87 &lt;br /&gt;
          + lprbarr + lprbconv + lprbpris + lavgsen + lpolpc &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;- 1&amp;lt;/span&amp;gt;, &lt;br /&gt;
          data = pdata, &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;model = &amp;quot;fd&amp;quot;&amp;lt;/span&amp;gt;)&lt;br /&gt;
     print(summary(fd3))&lt;br /&gt;
&lt;br /&gt;
Most of this function call is identical to the above call, but for two differences. First, the method now indicates that we need a first difference estimation &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;model = &amp;quot;fd&amp;quot;&amp;lt;/span&amp;gt;. Second, as we are taking first differences we need to estimate the model without a constant, which is why we include the term &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;- 1&amp;lt;/span&amp;gt; into the model specification.&lt;br /&gt;
&lt;br /&gt;
This model basically replicates the model estimated in Wooldridge&amp;#039;s Example 13.9:&lt;br /&gt;
&lt;br /&gt;
     Oneway (individual) effect &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;First-Difference Model&amp;lt;/span&amp;gt;&lt;br /&gt;
     &lt;br /&gt;
     Call:&lt;br /&gt;
     plm(formula = lcrmrte ~ d82 + d83 + d84 + d85 + d86 + d87 + lprbarr + &lt;br /&gt;
         lprbconv + lprbpris + lavgsen + lpolpc - 1, data = pdata, &lt;br /&gt;
         model = &amp;quot;fd&amp;quot;)&lt;br /&gt;
     &lt;br /&gt;
     Balanced Panel: n=90, T=7, N=630&lt;br /&gt;
     &lt;br /&gt;
     Residuals :&lt;br /&gt;
         Min.  1st Qu.   Median  3rd Qu.     Max. &lt;br /&gt;
     -0.65900 -0.07840  0.00296  0.07500  0.68300 &lt;br /&gt;
     &lt;br /&gt;
     Coefficients :&lt;br /&gt;
                Estimate Std. Error  t-value  Pr(&amp;gt;|t|)    &lt;br /&gt;
     d82       0.0077133  0.0170579   0.4522 0.6513202    &lt;br /&gt;
     d83      -0.0844391  0.0234564  -3.5998 0.0003484 ***&lt;br /&gt;
     d84      -0.1246632  0.0287464  -4.3367 1.733e-05 ***&lt;br /&gt;
     d85      -0.1215609  0.0331500  -3.6670 0.0002702 ***&lt;br /&gt;
     d86      -0.0863332  0.0366763  -2.3539 0.0189411 *  &lt;br /&gt;
     d87      -0.0377932  0.0399728  -0.9455 0.3448481    &lt;br /&gt;
     lprbarr  -0.3274943  0.0299801 -10.9237 &amp;lt; 2.2e-16 ***&lt;br /&gt;
     lprbconv -0.2381068  0.0182341 -13.0583 &amp;lt; 2.2e-16 ***&lt;br /&gt;
     lprbpris -0.1650464  0.0259690  -6.3555 4.488e-10 ***&lt;br /&gt;
     lavgsen  -0.0217606  0.0220909  -0.9850 0.3250509    &lt;br /&gt;
     lpolpc    0.3984266  0.0268820  14.8213 &amp;lt; 2.2e-16 ***&lt;br /&gt;
     ---&lt;br /&gt;
     Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1&lt;br /&gt;
     &lt;br /&gt;
     Total Sum of Squares:    22.197&lt;br /&gt;
     Residual Sum of Squares: 12.596&lt;br /&gt;
     R-Squared      :  0.43251 &lt;br /&gt;
     Adj. R-Squared :  0.4237 &lt;br /&gt;
     F-statistic: 36.6529 on 11 and 529 DF, p-value: &amp;lt; 2.22e-16&lt;br /&gt;
&lt;br /&gt;
As you can see, we have now lost the constant. When interpreting the results you should keep in mind that all variables are used in differences; to see this you have to refer to the note in the title of the regression output (&amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;First-Difference Model&amp;lt;/span&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
The marginal effects of the explanatory variables are negative as expected for all but the police number variable&amp;lt;ref&amp;gt;The reason for that is most likely the endogeneity of that variable.&amp;lt;/ref&amp;gt; and are exactly those reported in the Wooldridge textbook Example 13.9. What differs are the estimated values for the dummy variables. Recall that the model reported here uses the time-differences and hence also the time differences of the time dummy variables, which are slightly unintuitive. An alternative way to include these dummy variables is to drop one of them (so only include 5 here), use them in their actual levels and add a constant back. That is what is done in Wooldridge, but it only changes the estimated values for the dummy variable parameters, leaving the model fit and the estimated coefficients for all other explanatory variables unchanged.&lt;br /&gt;
&lt;br /&gt;
== Literature ==&lt;br /&gt;
&lt;br /&gt;
* Wooldridge, J.M. (2015) Introductory Econometrics, 6th edition&lt;br /&gt;
* Angrist and Pischke, Mostly Harmless Econometrics&lt;br /&gt;
* The documentation to the plm package [http://cran.r-project.org/web/packages/plm/vignettes/plm.pdf found here]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4253</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4253"/>
				<updated>2018-05-04T00:41:32Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When dealing with financial time series we often have relatively high-frequency observations available. It is very common, for instance, to have daily observations. In fact it is now possible to obtain hourly, minute, second or even millisecond observations, but here we will restrict ourselves to daily observations. For some assets these will be seven-days-a-week observations, while for others they will be work-day observations, so typically five observations a week.&lt;br /&gt;
&lt;br /&gt;
A video walk-through is available from https://youtu.be/8VXmRl5gzEU&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The packages we will use are &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (for univariate GARCH models) and &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format and indeed we learn that the data range from 2007-01-03 to 2018-04-27.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
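&lt;br /&gt;
For example, weekly returns could be computed as follows (a sketch; the object name &amp;lt;code&amp;gt;rIBM_w&amp;lt;/code&amp;gt; is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM_w &amp;amp;lt;- weeklyReturn(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;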
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is the specification for an example of the EWMA model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR(1) coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient on the lagged squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient on the lagged conditional variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all this information in two drawers (or, in technical speak, slots): @fit and @model. Each of these drawers contains a range of different elements, which you can list by asking for their names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
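Alternatively, the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package provides extractor methods so that you do not have to dig into the slots yourself; for instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;coef(ugfit)            # the same coefficients as ugfit@fit$coef&lt;br /&gt;
head(sigma(ugfit))     # the conditional standard deviations&lt;br /&gt;
head(residuals(ugfit)) # the model residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Below we stick with the slot notation and save the conditional variances and the squared residuals:&lt;br /&gt;
&lt;br /&gt;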
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
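From the estimated coefficients you can also compute the persistence, &amp;lt;math&amp;gt;\alpha_1+\beta_1&amp;lt;/math&amp;gt;, and the implied unconditional variance, &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;; a minimal sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pars &amp;amp;lt;- ugfit@fit$coef&lt;br /&gt;
persistence &amp;amp;lt;- pars[&amp;amp;quot;alpha1&amp;amp;quot;] + pars[&amp;amp;quot;beta1&amp;amp;quot;]  # about 0.92 here&lt;br /&gt;
uncond_var &amp;amp;lt;- pars[&amp;amp;quot;omega&amp;amp;quot;]/(1 - persistence)    # implied unconditional variance&lt;br /&gt;
sqrt(uncond_var)                                       # implied unconditional daily volatility&amp;lt;/pre&amp;gt;&lt;br /&gt;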
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&lt;br /&gt;
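Since &amp;lt;code&amp;gt;ug_var&amp;lt;/code&amp;gt; is a plain numeric vector this plot has no dates on the horizontal axis. If you prefer dated plots you can wrap the series into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object first; a sketch (the &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; package is loaded along with &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_xts &amp;amp;lt;- xts(ug_var, order.by = index(rIBM))  # attach the return dates&lt;br /&gt;
plot(ug_var_xts, main = &amp;amp;quot;Estimated conditional variance&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;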
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast3.png]]&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)  # square the sigma forecasts to get variance forecasts&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast4.png]]&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
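This is exactly what the GARCH(1,1) structure implies: for horizons &amp;lt;math&amp;gt;h\geq 2&amp;lt;/math&amp;gt; the variance forecast follows the recursion &amp;lt;math&amp;gt;\hat{\sigma}_{T+h}^2=\omega+(\alpha_1+\beta_1)\hat{\sigma}_{T+h-1}^2&amp;lt;/math&amp;gt;, so it converges geometrically, at rate &amp;lt;math&amp;gt;\alpha_1+\beta_1&amp;lt;/math&amp;gt; (about 0.92 here), towards the unconditional variance &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;.&lt;br /&gt;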
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than estimating univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and the correlation model in one big swoop; however, my experience with this and other packages is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
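Nothing forces the three specifications to be identical, though. As a sketch (a hypothetical variation, not used below), you could hand &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; a list with a different specification for each asset:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Hypothetical: a different univariate spec for each asset&lt;br /&gt;
spec_ibm  &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(1,0)))&lt;br /&gt;
spec_bp   &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;))&lt;br /&gt;
spec_goog &amp;amp;lt;- ugarchspec(distribution.model = &amp;amp;quot;std&amp;amp;quot;)&lt;br /&gt;
uspec.d &amp;amp;lt;- multispec(list(spec_ibm, spec_bp, spec_goog))&amp;lt;/pre&amp;gt;&lt;br /&gt;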
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate on it, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details). To specify the correlation model we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we want to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that they allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss these here). Therefore we will now learn how to extract these matrices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tells us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and the third dimension has 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
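A quick way to summarise such an array is to average over the third (time) dimension, which yields the mean correlation matrix across all days; a one-line base-R sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;apply(cor1, c(1,2), mean)  # average each pairwise correlation over the 2850 days&amp;lt;/pre&amp;gt;&lt;br /&gt;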
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # element [2,3]; leaving the last dimension empty means we want all days&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation1.png]]&lt;br /&gt;
&lt;br /&gt;
Because we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation2.png]]&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[ [1] ]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
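The covariance forecasts saved under &amp;lt;code&amp;gt;H&amp;lt;/code&amp;gt; can be handled in the same way, assuming they mirror the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;; a sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Hf &amp;amp;lt;- dccf1@mforecast$H   # list holding the 3x3x10 array of covariance forecasts&lt;br /&gt;
covf_IB &amp;amp;lt;- Hf[[1]][1,2,]  # covariance forecasts between IBM and BP&amp;lt;/pre&amp;gt;&lt;br /&gt;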
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:mg_forecast.png]]&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function, as in the sketch below.&lt;br /&gt;
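A minimal sketch of how that might look (hypothetical settings; here the univariate models are re-estimated internally rather than passed via &amp;lt;code&amp;gt;fit&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Hypothetical: hold back the last 100 observations, then roll 1-step-ahead forecasts&lt;br /&gt;
fit1.oos &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100)&lt;br /&gt;
dccf.oos &amp;amp;lt;- dccforecast(fit1.oos, n.ahead = 1, n.roll = 100)&amp;lt;/pre&amp;gt;&lt;br /&gt;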
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4252</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4252"/>
				<updated>2018-05-03T23:24:13Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Forecasts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7 days a week observations, but for others these will be work-day observations, so typically 5 days a week of observations.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that can enable us to estimate volatility models. The packages we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; for univariate GARCH models and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models) package both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it will give us some easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packes and then load them,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format and indeed we learn that the data range from 2007-01-03 to 2018-04-30.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Galanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is the specification for an # an example of the EWMA Model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best arameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;fit&amp;lt;/code&amp;gt; is now a list that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or in technical speak slots, @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast3.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 50 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast4.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details.). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate and the correlation model in one big swoop. however, my experience with this, and other packages, is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model (I assume that you know what a DCC model is. This is not the place to elaborate on this and many textbooks or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] to this package provide details). To specify the correlation specification we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for estimated parameters. Importantly &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we ought to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the functions&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss this here). Therefore we will now learn how we extract these.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at the imension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three outputs which tells us that we have a three dimensional object. The firts two dimensions have 3 elements each (think a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) sorrelation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2853;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,1,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see, there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
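Since correlation matrices are symmetric, the (2,1) element carries the same series as the (1,2) element; a quick sanity check (a minimal illustration, not needed for the analysis itself):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# by symmetry the (2,1) series should be identical to the (1,2) series&lt;br /&gt;
all.equal(corf_IB, Rf[[1]][2,1,])&amp;lt;/pre&amp;gt;&lt;br /&gt;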
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:mg_forecast.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
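A minimal sketch of how this might look (here we re-estimate without the pre-fitted &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;, since the univariate fits would also need a matching &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt;; check &amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;?dccforecast&amp;lt;/code&amp;gt; for the exact behaviour in your package version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# hold back the last 100 observations from the estimation&lt;br /&gt;
fit2 &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100)&lt;br /&gt;
# 100 one-step-ahead forecasts rolled through the held-back days&lt;br /&gt;
dccf2 &amp;amp;lt;- dccforecast(fit2, n.ahead = 1, n.roll = 99)&amp;lt;/pre&amp;gt;&lt;br /&gt;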
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
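For instance, a GO-GARCH (factor) model can be specified and estimated along the following lines; this is a minimal sketch using the default settings, so see &amp;lt;code&amp;gt;?gogarchspec&amp;lt;/code&amp;gt; for the available options:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;gspec &amp;amp;lt;- gogarchspec()            # default GO-GARCH specification&lt;br /&gt;
gfit &amp;amp;lt;- gogarchfit(gspec, data = rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;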
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Mg_forecast.png&amp;diff=4251</id>
		<title>File:Mg forecast.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Mg_forecast.png&amp;diff=4251"/>
				<updated>2018-05-03T23:23:32Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4250</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4250"/>
				<updated>2018-05-03T23:23:07Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Forecasts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When dealing with financial time-series we often have relatively high-frequency observations available. It is very common, for instance, to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7-days-a-week observations, but for others they will be work-day observations, so typically 5 observations a week.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The packages we will use are &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (for univariate GARCH models) and &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these objects to understand what data they contain:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format, and indeed we learn that the data range from 2007-01-03 to 2018-04-27.&lt;br /&gt;
&lt;br /&gt;
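One convenient feature of &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; objects is that you can subset them directly by date strings, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;IBM[&amp;amp;quot;2018-04&amp;amp;quot;]                  # all observations in April 2018&lt;br /&gt;
IBM[&amp;amp;quot;2007-01-03/2007-01-09&amp;amp;quot;]    # a specific date window&amp;lt;/pre&amp;gt;&lt;br /&gt;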
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
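&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBMw &amp;amp;lt;- weeklyReturn(IBM)   # weekly rather than daily returns&amp;lt;/pre&amp;gt;&lt;br /&gt;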
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec()&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
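For example, to specify a GJR-GARCH(1,1) variance equation instead of the standard &amp;lt;code&amp;gt;sGARCH&amp;lt;/code&amp;gt; you could write the following (a minimal sketch; see the documentation for all options):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec_gjr &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;, garchOrder = c(1,1)))&amp;lt;/pre&amp;gt;&lt;br /&gt;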
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is an example specification for an EWMA model (although we will not use it below). An EWMA model is an integrated GARCH (iGARCH) model with the constant &amp;lt;code&amp;gt;omega&amp;lt;/code&amp;gt; fixed at zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
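For reference, the variance equation of the standard GARCH(1,1) model estimated here is &amp;lt;math&amp;gt;\sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2&amp;lt;/math&amp;gt;, which makes explicit why &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; attaches to the lagged squared residual and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; to the lagged conditional variance.&lt;br /&gt;
&lt;br /&gt;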
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or, in technical speak, slots: @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast3.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast4.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
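For the standard GARCH(1,1) this unconditional variance is &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;. As a small illustration (assuming the &amp;lt;code&amp;gt;uncvariance&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;persistence&amp;lt;/code&amp;gt; methods provided by &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;uncvariance(ugfit)   # long-run variance: omega/(1 - alpha1 - beta1)&lt;br /&gt;
persistence(ugfit)   # alpha1 + beta1; the closer to 1, the slower the reversion&amp;lt;/pre&amp;gt;&lt;br /&gt;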
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate and the correlation model in one big swoop. However, my experience with this and other packages is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
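For reference, the DCC(1,1) correlation dynamics take the form &amp;lt;math&amp;gt;Q_t = (1-a-b)\bar{Q} + a z_{t-1} z_{t-1}^{\prime} + b Q_{t-1}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;z_t&amp;lt;/math&amp;gt; are the standardised residuals and &amp;lt;math&amp;gt;\bar{Q}&amp;lt;/math&amp;gt; is their unconditional covariance; the correlation matrix is then obtained by rescaling, &amp;lt;math&amp;gt;R_t = \mathrm{diag}(Q_t)^{-1/2} \, Q_t \, \mathrm{diag}(Q_t)^{-1/2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;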
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will now proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details). To specify the correlation specification we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we ought to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss these here). Therefore we will now learn how to extract these matrices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here, we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;, BP is the second asset and Google the third. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see, there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:mg_forecast.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Correlation2.png&amp;diff=4249</id>
		<title>File:Correlation2.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Correlation2.png&amp;diff=4249"/>
				<updated>2018-05-03T23:22:10Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Correlation1.png&amp;diff=4248</id>
		<title>File:Correlation1.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Correlation1.png&amp;diff=4248"/>
				<updated>2018-05-03T23:21:52Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4247</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4247"/>
				<updated>2018-05-03T23:20:51Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Model Estimation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When dealing with financial time-series we often have relatively high-frequency observations available. It is very common, for instance, to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7-days-a-week observations, but for others they will be work-day observations, so typically 5 observations a week.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The packages we will use are &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (for univariate GARCH models) and &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these objects to understand what data they contain:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format, and indeed we learn that the data range from 2007-01-03 to 2018-04-27.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec()&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
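&lt;br /&gt;
In the same way you could change the variance model or the error distribution. A sketch (the &amp;lt;code&amp;gt;gjrGARCH&amp;lt;/code&amp;gt; model and the Student-t distribution &amp;lt;code&amp;gt;std&amp;lt;/code&amp;gt; are standard &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; options; the particular combination here is just an illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# AR(1) mean, GJR-GARCH(1,1) variance, Student-t errors&lt;br /&gt;
gjr_spec &amp;amp;lt;- ugarchspec(variance.model=list(model=&amp;amp;quot;gjrGARCH&amp;amp;quot;, garchOrder=c(1,1)),&lt;br /&gt;
        mean.model=list(armaOrder=c(1,0)),&lt;br /&gt;
        distribution.model=&amp;amp;quot;std&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;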
&lt;br /&gt;
The following is an example specification for the EWMA model (although we will not use it below). The EWMA model is obtained as an integrated GARCH (&amp;lt;code&amp;gt;iGARCH&amp;lt;/code&amp;gt;) model in which the constant &amp;lt;code&amp;gt;omega&amp;lt;/code&amp;gt; is fixed at zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR(1) coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient on the lagged squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient on the lagged conditional variance.&lt;br /&gt;
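&lt;br /&gt;
The sum of &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; measures how persistent the volatility is, and together with &amp;lt;code&amp;gt;omega&amp;lt;/code&amp;gt; it implies the long-run variance. A minimal sketch, using the coefficient extraction explained further below (the object names are ours):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# persistence and implied long-run volatility of a GARCH(1,1),&lt;br /&gt;
# using the standard formula omega/(1 - alpha1 - beta1) for the unconditional variance&lt;br /&gt;
ug_coef &amp;amp;lt;- ugfit@fit$coef&lt;br /&gt;
ug_pers &amp;amp;lt;- ug_coef[&amp;amp;quot;alpha1&amp;amp;quot;] + ug_coef[&amp;amp;quot;beta1&amp;amp;quot;]   # about 0.92 here&lt;br /&gt;
sqrt(ug_coef[&amp;amp;quot;omega&amp;amp;quot;]/(1 - ug_pers))   # long-run daily volatility, about 1.4%&amp;lt;/pre&amp;gt;&lt;br /&gt;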
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or, in technical speak, slots): &amp;lt;code&amp;gt;@fit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;@model&amp;lt;/code&amp;gt;. Each of these drawers contains a range of different things. You can figure out what they contain by asking for the element names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
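&lt;br /&gt;
The same pattern works for the other elements listed above. For instance, the standard errors and the full coefficient table (both names appear in the &amp;lt;code&amp;gt;@fit&amp;lt;/code&amp;gt; listing above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$se.coef   # standard errors, in the same order as the coefficients&lt;br /&gt;
ugfit@fit$matcoef   # matrix with estimates, std. errors, t-values and p-values&amp;lt;/pre&amp;gt;&lt;br /&gt;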
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast3.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)  # squares the volatility forecasts so they are comparable to variances&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast4.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
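&lt;br /&gt;
You can check this convergence directly. A sketch using helper methods provided by &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (see the package documentation for the full list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;persistence(ugfit)        # alpha1 + beta1&lt;br /&gt;
sqrt(uncvariance(ugfit))  # unconditional volatility the forecast converges to&lt;br /&gt;
halflife(ugfit)           # days until a volatility shock has decayed by half&amp;lt;/pre&amp;gt;&lt;br /&gt;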
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and of the correlation model in one swoop. However, my experience with this, and other packages, is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
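&lt;br /&gt;
If you did want asset-specific specifications you could hand &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; a list of different specs instead. A sketch (the particular model choices are arbitrary illustrations):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;uspec.h = multispec(list(&lt;br /&gt;
  ugarchspec(mean.model = list(armaOrder = c(1,0))),       # IBM&lt;br /&gt;
  ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;)),  # BP&lt;br /&gt;
  ugarchspec(mean.model = list(armaOrder = c(0,0)))))      # Google/Alphabet&amp;lt;/pre&amp;gt;&lt;br /&gt;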
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details). To specify the correlation dynamics we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
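&lt;br /&gt;
If the multivariate normal distribution is too restrictive (daily returns tend to be fat-tailed), a multivariate Student-t DCC can be specified analogously. A sketch (&amp;lt;code&amp;gt;mvt&amp;lt;/code&amp;gt; is one of the distributions listed in the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; documentation):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1.t = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvt&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;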
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for estimated parameters. Importantly &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we ought to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
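&lt;br /&gt;
As before, the results object has two slots, here &amp;lt;code&amp;gt;@model&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;@mfit&amp;lt;/code&amp;gt;, which you can explore with the same pattern as for &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; (a sketch; check the object with &amp;lt;code&amp;gt;str&amp;lt;/code&amp;gt; if the slot names differ in your package version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1                # prints the full estimation output&lt;br /&gt;
names(fit1@mfit)    # lists the elements saved in the fit slot&amp;lt;/pre&amp;gt;&lt;br /&gt;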
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss this here). Therefore we will now learn how we extract these.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and the third dimension has 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,1,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:correlation2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlations.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
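&lt;br /&gt;
A minimal sketch of that workflow, holding back the last 100 observations and then producing rolling 1-step-ahead forecasts (the object names are ours; check &amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;?dccforecast&amp;lt;/code&amp;gt; for the exact argument conventions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1.os &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100)      # estimation ignores the last 100 days&lt;br /&gt;
dccf.os &amp;amp;lt;- dccforecast(fit1.os, n.ahead = 1, n.roll = 100)  # rolling forecasts over the held-back days&amp;lt;/pre&amp;gt;&lt;br /&gt;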
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Ug_forecast4.png&amp;diff=4246</id>
		<title>File:Ug forecast4.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Ug_forecast4.png&amp;diff=4246"/>
				<updated>2018-05-03T23:19:46Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4245</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4245"/>
				<updated>2018-05-03T23:19:20Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Model Forecasting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7 days a week observations, but for others these will be work-day observations, so typically 5 days a week of observations.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that can enable us to estimate volatility models. The packages we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; for univariate GARCH models and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models) package both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it will give us some easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packes and then load them,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format and indeed we learn that the data range from 2007-01-03 to 2018-04-30.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Galanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is the specification for an # an example of the EWMA Model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best arameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;fit&amp;lt;/code&amp;gt; is now a list that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or in technical speak slots, @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast3.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 50 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:ug_forecast4.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details.). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate and the correlation model in one big swoop. however, my experience with this, and other packages, is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model (I assume that you know what a DCC model is. This is not the place to elaborate on this and many textbooks or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] to this package provide details). To specify the correlation specification we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for estimated parameters. Importantly &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we ought to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the functions&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss this here). Therefore we will now learn how we extract these.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here, we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
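Since &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; is an ordinary three-dimensional array you can manipulate it with standard R tools. As a small sketch (not needed for anything below), the following averages each pairwise correlation over the whole sample:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Average each pairwise correlation across all days (illustration only)&lt;br /&gt;
cor1_avg &amp;amp;lt;- apply(cor1, c(1,2), mean)&lt;br /&gt;
cor1_avg   # a 3x3 matrix of time-averaged correlations&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;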
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # row 2, column 3; leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
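Should the automatic conversion via &amp;lt;code&amp;gt;as.xts&amp;lt;/code&amp;gt; ever fail, you can also build the series by hand. This is only a sketch and assumes that the third dimension of &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; carries the observation dates as names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Sketch: construct the xts object explicitly (assumes date names on the 3rd dimension)&lt;br /&gt;
cor_BG2 &amp;amp;lt;- xts(cor1[2,3,], order.by = as.Date(names(cor1[2,3,])))&lt;br /&gt;
plot(cor_BG2)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;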
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. This one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
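A common use of such forecasts is to compute the forecast variance of a portfolio, &amp;lt;math&amp;gt;w^\prime H w&amp;lt;/math&amp;gt;. The following sketch assumes that the covariance forecasts sit in &amp;lt;code&amp;gt;dccf1@mforecast$H&amp;lt;/code&amp;gt; in the same list/array format as the correlations, and uses an equally weighted portfolio:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Sketch: 10-step-ahead portfolio variance forecasts, equally weighted portfolio&lt;br /&gt;
Hf &amp;amp;lt;- dccf1@mforecast$H      # covariance forecasts (assumed: list with one 3x3x10 array)&lt;br /&gt;
w &amp;amp;lt;- rep(1/3, 3)             # portfolio weights&lt;br /&gt;
pvar_f &amp;amp;lt;- apply(Hf[[1]], 3, function(H) t(w) %*% H %*% w)&lt;br /&gt;
pvar_f&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;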
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
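A minimal sketch of how this might look; here we hold back the last 100 observations at estimation time and then produce rolling 1-step-ahead forecasts over the held-back period (the exact behaviour is documented in &amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;?dccforecast&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Sketch: hold back 100 observations ...&lt;br /&gt;
fit.oos &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100)&lt;br /&gt;
# ... and forecast over the held-back period (n.roll cannot exceed out.sample)&lt;br /&gt;
dccf.oos &amp;amp;lt;- dccforecast(fit.oos, n.ahead = 1, n.roll = 100)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;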
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Ug_forecast3.png&amp;diff=4244</id>
		<title>File:Ug forecast3.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Ug_forecast3.png&amp;diff=4244"/>
				<updated>2018-05-03T23:18:31Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Unnamed-chunk-18-1.png&amp;diff=4243</id>
		<title>File:Unnamed-chunk-18-1.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Unnamed-chunk-18-1.png&amp;diff=4243"/>
				<updated>2018-05-03T23:17:46Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4242</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4242"/>
				<updated>2018-05-03T23:16:50Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Model Estimation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When dealing with financial time series we often have relatively high-frequency observations available. It is very common, for instance, to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations, but here we will restrict ourselves to daily observations. For some assets these will be available 7 days a week, but for others only on working days, so typically 5 days a week.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The packages we will use are &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (for univariate GARCH models) and &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these data frames to understand what these data are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format, and indeed we learn that the data range from 2007-01-03 to 2018-04-27.&lt;br /&gt;
&lt;br /&gt;
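&amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; also provides extractor functions for the individual columns, for example &amp;lt;code&amp;gt;Ad()&amp;lt;/code&amp;gt; for the adjusted price and &amp;lt;code&amp;gt;Cl()&amp;lt;/code&amp;gt; for the closing price:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Extract single columns from the xts object&lt;br /&gt;
head(Ad(IBM))   # adjusted closing price&lt;br /&gt;
head(Cl(IBM))   # closing price&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;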
You can in fact produce a somewhat fancy-looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
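For instance, the following sketch computes weekly returns and, via the &amp;lt;code&amp;gt;type&amp;lt;/code&amp;gt; argument, log rather than arithmetic daily returns:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Sketch: weekly returns and log returns&lt;br /&gt;
rIBM.w &amp;amp;lt;- weeklyReturn(IBM)&lt;br /&gt;
rIBM.log &amp;amp;lt;- dailyReturn(IBM, type = &amp;amp;quot;log&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;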
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
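In the same way you can change the variance model or the error distribution. A sketch of an alternative specification, an AR(1) mean with a GJR-GARCH(1,1) variance and Student-t errors (see the documentation for the full list of options):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Sketch: AR(1) mean, GJR-GARCH(1,1) variance, Student-t errors&lt;br /&gt;
gjr_spec &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;, garchOrder = c(1,1)),&lt;br /&gt;
        mean.model = list(armaOrder = c(1,0)),&lt;br /&gt;
        distribution.model = &amp;amp;quot;std&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;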
The following is the specification for an EWMA model, as a further example (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
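A quantity you will often want to check is the persistence of the volatility process, which for a GARCH(1,1) is &amp;lt;math&amp;gt;\alpha_1 + \beta_1&amp;lt;/math&amp;gt;. A quick calculation from the estimated coefficients (the &amp;lt;code&amp;gt;@fit&amp;lt;/code&amp;gt; slot used here is explained just below) gives roughly 0.92, so shocks to the variance die out only slowly:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Persistence of the estimated GARCH(1,1): alpha1 + beta1&lt;br /&gt;
sum(ugfit@fit$coef[c(&amp;amp;quot;alpha1&amp;amp;quot;, &amp;amp;quot;beta1&amp;amp;quot;)])&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;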
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or in technical speak slots, @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
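Before moving on it is often worth inspecting the standardised residuals, &amp;lt;math&amp;gt;\hat{z}_t = \hat{\epsilon}_t / \hat{\sigma}_t&amp;lt;/math&amp;gt;, which should look roughly i.i.d. if the model captures the volatility dynamics. A minimal sketch using the elements of the &amp;lt;code&amp;gt;@fit&amp;lt;/code&amp;gt; slot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Standardised residuals: residuals divided by the estimated conditional sd&lt;br /&gt;
ug_std &amp;amp;lt;- ugfit@fit$residuals / ugfit@fit$sigma&lt;br /&gt;
plot(ug_std, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;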
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-18-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-19-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
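We can verify this: for a GARCH(1,1) the unconditional variance is &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;. With the estimates above this is about 0.00019, i.e. an unconditional volatility of about 0.0138, which is indeed where the forecast is heading:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Unconditional variance implied by the estimated GARCH(1,1)&lt;br /&gt;
uc_var &amp;amp;lt;- ugfit@fit$coef[&amp;amp;quot;omega&amp;amp;quot;] / (1 - ugfit@fit$coef[&amp;amp;quot;alpha1&amp;amp;quot;] - ugfit@fit$coef[&amp;amp;quot;beta1&amp;amp;quot;])&lt;br /&gt;
sqrt(uc_var)   # approx. 0.0138&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;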
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than estimating univariate GARCH models, but fortunately procedures have been developed that deal with most of the associated issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package, we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one first estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and of the correlation model in one big swoop. However, my experience with this and other packages is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But here we will proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate on it, and many textbooks or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package provide details). To specify the correlation dynamics we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we want to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the functions&amp;#039; help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss this here). Therefore we will now learn how we extract these.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here, we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # row 2, column 3; leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. This one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4241</id>
		<title>File:CondVar2.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4241"/>
				<updated>2018-05-03T23:16:24Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Rb uploaded a new version of File:CondVar2.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4240</id>
		<title>File:CondVar2.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4240"/>
				<updated>2018-05-03T23:15:56Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Rb uploaded a new version of File:CondVar2.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4239</id>
		<title>File:CondVar2.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:CondVar2.png&amp;diff=4239"/>
				<updated>2018-05-03T23:14:47Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4238</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4238"/>
				<updated>2018-05-03T23:13:04Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Model Estimation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When dealing with financial time series we often have relatively high-frequency observations available. It is very common, for instance, to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations, but here we will restrict ourselves to daily observations. For some assets these will be available 7 days a week, but for others only on working days, so typically 5 days a week.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The packages we will use are &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; (for univariate GARCH models) and &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these data frames to understand what these data are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format, and indeed we learn that the data range from 2007-01-03 to 2018-04-27.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy-looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns rather than prices. The &amp;lt;code&amp;gt;dailyReturn&amp;lt;/code&amp;gt; function transforms the price data into returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
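For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM_w &amp;amp;lt;- weeklyReturn(IBM)   # weekly instead of daily returns&amp;lt;/pre&amp;gt;&lt;br /&gt;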
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Galanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is decide what type of GARCH model you want to estimate and then let R know about it. This is the job of the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function. There is in fact a default specification, which you invoke as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
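&lt;br /&gt;
To give one further example of such a change (a sketch only; the option names are those documented in the vignette), the following specifies a GJR-GARCH(1,1) with an AR(1) mean and Student-t errors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;gjr_spec &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;, garchOrder = c(1,1)),&lt;br /&gt;
        mean.model = list(armaOrder = c(1,0)),&lt;br /&gt;
        distribution.model = &amp;amp;quot;std&amp;amp;quot;)   # GJR-GARCH(1,1) with Student-t errors&amp;lt;/pre&amp;gt;&lt;br /&gt;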
&lt;br /&gt;
The following is an example specification for the EWMA model (although we will not use it below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now a list that contains a range of results from the estimation. Let&amp;#039;s have a look at the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR(1) coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient on the lagged squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient on the lagged conditional variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis, so it is important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all this information. In that object you can find two drawers (or, in technical speak, slots): @fit and @model. Each of these drawers contains a range of different things; you can figure out what they contain by asking for the element names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
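A quick side calculation (standard GARCH(1,1) algebra rather than package output): the persistence of the process is &amp;lt;math&amp;gt;\alpha_1+\beta_1&amp;lt;/math&amp;gt; and the implied unconditional variance is &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;. Using the extracted coefficients:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_coef &amp;amp;lt;- ugfit@fit$coef&lt;br /&gt;
persistence &amp;amp;lt;- ug_coef[&amp;amp;quot;alpha1&amp;amp;quot;] + ug_coef[&amp;amp;quot;beta1&amp;amp;quot;]   # approx. 0.92 here&lt;br /&gt;
uncond_var &amp;amp;lt;- ug_coef[&amp;amp;quot;omega&amp;amp;quot;] / (1 - persistence)   # implied long-run variance&amp;lt;/pre&amp;gt;&lt;br /&gt;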
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
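Similarly, the ready-made coefficient tables (the element names are those listed above) can be pulled out:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$matcoef          # coefficient table: estimate, std. error, t-value, p-value&lt;br /&gt;
ugfit@fit$robust.matcoef   # the same table with robust standard errors&amp;lt;/pre&amp;gt;&lt;br /&gt;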
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/CondVar2.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-18-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
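&lt;br /&gt;
As an aside (a common rule of thumb rather than anything specific to &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt;), daily volatility forecasts are often annualised by scaling with the square root of the number of trading days:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f_ann &amp;amp;lt;- sqrt(252) * ug_f   # rough annualisation, assuming about 252 trading days per year&amp;lt;/pre&amp;gt;&lt;br /&gt;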
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-19-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
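&lt;br /&gt;
To see why, recall the standard GARCH(1,1) forecast recursion: the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;-step-ahead variance forecast is &amp;lt;math&amp;gt;\hat{\sigma}^2_{T+k} = \bar{\sigma}^2 + (\alpha_1+\beta_1)^{k-1}(\hat{\sigma}^2_{T+1}-\bar{\sigma}^2)&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\bar{\sigma}^2 = \omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt; is the unconditional variance. With the estimated persistence of &amp;lt;math&amp;gt;\alpha_1+\beta_1 \approx 0.92&amp;lt;/math&amp;gt; the gap to the unconditional variance shrinks by roughly 8% per day, which produces the slow decay in the plot.&lt;br /&gt;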
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate and the correlation model in one big swoop; however, my experience with this, and other packages, is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
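&lt;br /&gt;
The three specifications do not have to be identical; &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; also accepts a list of individually created specifications. A hedged sketch of what that could look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# a sketch only; the spec names below are our own and the order must match the columns of rX&lt;br /&gt;
spec_ibm &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(1,0)))&lt;br /&gt;
spec_bp &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;))&lt;br /&gt;
spec_goog &amp;amp;lt;- ugarchspec(distribution.model = &amp;amp;quot;std&amp;amp;quot;)&lt;br /&gt;
uspec.het &amp;amp;lt;- multispec(list(spec_ibm, spec_bp, spec_goog))&amp;lt;/pre&amp;gt;&lt;br /&gt;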
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details). To specify the correlation dynamics we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
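&lt;br /&gt;
If you suspected fat tails in the joint distribution you could, for instance, swap the multivariate normal for the multivariate Student distribution listed in the documentation (a sketch, not used below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1t = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvt&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;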
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we want to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that you allow for time-variation in the correlation between the assets (there are also constant correlation models, but we do not discuss these here). Therefore we will now learn how to extract these matrices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tells us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
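&lt;br /&gt;
As an aside, the diagonal of the covariance array contains the model-implied conditional variances of the individual assets, so you could extract, say, IBM&amp;#039;s conditional volatility as follows (a sketch using the same indexing logic we apply to the correlations below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;sig_IBM &amp;amp;lt;- sqrt(cov1[1,1,])   # conditional volatility (standard deviation) of IBM returns&lt;br /&gt;
plot(as.xts(sig_IBM))&amp;lt;/pre&amp;gt;&lt;br /&gt;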
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at pseudo out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
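&lt;br /&gt;
A hedged sketch of what that might look like (the exact argument interplay is best checked via &amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;?dccforecast&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# a sketch: hold back the last 100 observations, then produce rolling 1-step forecasts&lt;br /&gt;
fit_oos &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100, fit.control = list(eval.se = TRUE))&lt;br /&gt;
dccf_oos &amp;amp;lt;- dccforecast(fit_oos, n.ahead = 1, n.roll = 99)&lt;br /&gt;
# note: we do not pass fit = multf here, as the univariate fits would also need&lt;br /&gt;
# to hold back the same 100 observations&amp;lt;/pre&amp;gt;&lt;br /&gt;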
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4237</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4237"/>
				<updated>2018-05-03T23:11:24Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Model Specification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7 days a week observations, but for others these will be work-day observations, so typically 5 days a week of observations.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that can enable us to estimate volatility models. The packages we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; for univariate GARCH models and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models) package both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it will give us some easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packes and then load them,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format and indeed we learn that the data range from 2007-01-03 to 2018-04-30.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Galanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is the specification for an # an example of the EWMA Model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best arameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;fit&amp;lt;/code&amp;gt; is now a list that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or in technical speak slots, @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-16-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-18-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 50 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-19-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and the correlation model in one go; however, my experience with this and other packages is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets: IBM, BP and Google/Alphabet).&lt;br /&gt;
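&lt;br /&gt;
Nothing forces you to use identical specifications, though. As a sketch, &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; also accepts a list of individual specifications, so you could, for instance, give one asset an asymmetric GJR-GARCH model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec_ar &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(1,0)))&lt;br /&gt;
spec_gjr &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;),&lt;br /&gt;
        mean.model = list(armaOrder = c(1,0)))&lt;br /&gt;
uspec.h &amp;amp;lt;- multispec(list(spec_ar, spec_gjr, spec_ar))  # the 2nd asset (BP) gets the GJR model&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below we continue with the homogeneous specification &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;.&lt;br /&gt;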
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But we will here proceed to specify the DCC model. (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details.) To specify the correlation model we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we should use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the functions&amp;#039; help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and searching online.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that they allow for time-variation in the correlations between the assets (there are also constant correlation models, but we do not discuss those here). Therefore we will now learn how to extract them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrices&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrices&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
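&lt;br /&gt;
A three dimensional array like this is easily summarised with base R. For instance, a sketch that averages over the time dimension to get the average correlation matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_avg &amp;amp;lt;- apply(cor1, c(1,2), mean)  # average each (row, column) element over all 2850 days&lt;br /&gt;
cor_avg&amp;lt;/pre&amp;gt;&lt;br /&gt;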
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the third. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is then a three dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
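&lt;br /&gt;
If you prefer to inspect all three forecast series side by side you could, as a sketch, bind them into one matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_all &amp;amp;lt;- cbind(corf_IB, corf_IG, corf_BG)  # 10 rows, one column per asset pair&lt;br /&gt;
head(corf_all, 3)&amp;lt;/pre&amp;gt;&lt;br /&gt;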
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4236</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4236"/>
				<updated>2018-05-03T23:10:49Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Data upload */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time series you often have relatively high frequency observations available. It is very common, for instance, to have daily observations; in fact it is now possible to obtain hourly, minute, second or even millisecond observations. Here, however, we will restrict ourselves to daily observations. For some assets these will be available seven days a week, while for others they will be working-day observations, so typically five days a week.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that enable us to estimate volatility models. The ones we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package (for univariate GARCH models) and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package (for multivariate models), both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it gives us easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packages and then load them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COMPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use the convenient data retrieval function &amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt; delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package. It works, for instance, for stock data, and the default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out which stock has which symbol you should be able to search the internet for a list of ticker symbols. The following shows how to use the function. Note that, in my experience, the connection sometimes fails and you may get an error message; in that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these data frames to understand what data they contain:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object, a time-series format, and indeed that the data range from 2007-01-03 to 2018-04-27, the last trading day before our end date.&lt;br /&gt;
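&lt;br /&gt;
If you would rather avoid &amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt; writing straight into your workspace, the function has a documented &amp;lt;code&amp;gt;auto.assign&amp;lt;/code&amp;gt; argument which makes it return the data instead; a minimal sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ibm &amp;amp;lt;- getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate, auto.assign = FALSE)&amp;lt;/pre&amp;gt;&lt;br /&gt;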
&lt;br /&gt;
You can in fact produce a somewhat fancy-looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When estimating volatility models we work with returns, and there is a convenient function that transforms the price data into returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
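&lt;br /&gt;
By default &amp;lt;code&amp;gt;dailyReturn&amp;lt;/code&amp;gt; delivers simple (arithmetic) returns. If you wanted continuously compounded returns instead, the function&amp;#039;s documented &amp;lt;code&amp;gt;type&amp;lt;/code&amp;gt; argument is the place to go; a sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM_log &amp;amp;lt;- dailyReturn(IBM, type = &amp;amp;quot;log&amp;amp;quot;)  # log returns instead of the arithmetic default&amp;lt;/pre&amp;gt;&lt;br /&gt;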
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to decide what type of GARCH model you want to estimate and then let R know about it. It is the &amp;lt;code&amp;gt;ugarchspec()&amp;lt;/code&amp;gt; function which is used to specify the model type. There is in fact a default specification, and the way to invoke it is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now an object which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
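&lt;br /&gt;
In the same way you can change other parts of the specification. For instance, a sketch that keeps the AR(1) mean model but assumes a Student-t conditional distribution (the documented &amp;amp;quot;std&amp;amp;quot; option):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec_t &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(1,0)),&lt;br /&gt;
        distribution.model = &amp;amp;quot;std&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;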
&lt;br /&gt;
The following is an example specification for an EWMA model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR(1) coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient on the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient on the lagged conditional variance.&lt;br /&gt;
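&lt;br /&gt;
One number you will often want from these estimates is the persistence, &amp;lt;math&amp;gt;\alpha_1 +\beta_1&amp;lt;/math&amp;gt;, which governs how slowly volatility shocks die out; a sketch (coefficient extraction is explained in more detail below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;sum(ugfit@fit$coef[c(&amp;amp;quot;alpha1&amp;amp;quot;, &amp;amp;quot;beta1&amp;amp;quot;)])  # about 0.92, so volatility shocks are very persistent&amp;lt;/pre&amp;gt;&lt;br /&gt;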
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or, in technical speak, slots: @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
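&lt;br /&gt;
The &amp;lt;code&amp;gt;matcoef&amp;lt;/code&amp;gt; element listed above bundles the estimates with their standard errors, t values and p values into one table, with &amp;lt;code&amp;gt;robust.matcoef&amp;lt;/code&amp;gt; holding the robust version:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$matcoef  # full table of estimates, standard errors, t values and p values&lt;br /&gt;
ugfit@fit$robust.matcoef  # the same table with robust standard errors&amp;lt;/pre&amp;gt;&lt;br /&gt;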
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-16-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:GoogleChart1.png&amp;diff=4235</id>
		<title>File:GoogleChart1.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:GoogleChart1.png&amp;diff=4235"/>
				<updated>2018-05-03T23:06:34Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4234</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4234"/>
				<updated>2018-05-03T23:05:29Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Data upload */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7 days a week observations, but for others these will be work-day observations, so typically 5 days a week of observations.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that can enable us to estimate volatility models. The packages we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; for univariate GARCH models and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models) package both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it will give us some easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packes and then load them,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Low&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object, a time-series format, and that the data range from 2007-01-03 to 2018-04-27, the last trading day before our chosen end date.&lt;br /&gt;
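&lt;br /&gt;
As the data are stored in an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object you can conveniently pick out columns and date ranges. A small sketch using the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; extractor function &amp;lt;code&amp;gt;Ad()&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; date subsetting syntax:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ibm_adj &amp;amp;lt;- Ad(IBM)        # extract the adjusted closing price&lt;br /&gt;
ibm_2018 &amp;amp;lt;- IBM[&amp;amp;quot;2018&amp;amp;quot;]   # all observations in 2018&amp;lt;/pre&amp;gt;&lt;br /&gt;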
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/GoogleChart1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
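&lt;br /&gt;
Note that &amp;lt;code&amp;gt;dailyReturn&amp;lt;/code&amp;gt; delivers arithmetic (simple) returns by default. Should you prefer log returns you can use the &amp;lt;code&amp;gt;type&amp;lt;/code&amp;gt; option; a brief sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM_log &amp;amp;lt;- dailyReturn(IBM, type = &amp;amp;quot;log&amp;amp;quot;)   # log rather than arithmetic returns&lt;br /&gt;
rIBM_week &amp;amp;lt;- weeklyReturn(IBM)                  # weekly returns&amp;lt;/pre&amp;gt;&lt;br /&gt;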
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Ghalanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is decide what type of GARCH model you want to estimate and then let R know about it. The &amp;lt;code&amp;gt;ugarchspec()&amp;lt;/code&amp;gt; function is used for this purpose. There is in fact a default specification, which is invoked as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now an object which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
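&lt;br /&gt;
To illustrate how flexible the specification machinery is, the following sketch sets up a GJR-GARCH(1,1) model with an AR(1) mean equation and a Student-t conditional distribution; all of these options are documented in the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; vignette, and we will not use this specification below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;gjr_spec &amp;amp;lt;- ugarchspec(variance.model=list(model=&amp;amp;quot;gjrGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(1,0)),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;std&amp;amp;quot;)   # std denotes the Student-t distribution&amp;lt;/pre&amp;gt;&lt;br /&gt;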
&lt;br /&gt;
The following is an example specification for an EWMA model (although we will not use it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best parameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; is now an object that contains a range of results from the estimation. Let&amp;#039;s have a look at the results:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
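&lt;br /&gt;
A useful sanity check on these estimates is the implied persistence, &amp;lt;code&amp;gt;alpha1 + beta1&amp;lt;/code&amp;gt;, and the implied unconditional (long-run) variance, &amp;lt;code&amp;gt;omega/(1 - alpha1 - beta1)&amp;lt;/code&amp;gt;. A minimal sketch, calculated from the coefficients above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;coefs &amp;amp;lt;- ugfit@fit$coef&lt;br /&gt;
coefs[&amp;amp;quot;alpha1&amp;amp;quot;] + coefs[&amp;amp;quot;beta1&amp;amp;quot;]   # persistence, here about 0.92&lt;br /&gt;
coefs[&amp;amp;quot;omega&amp;amp;quot;]/(1 - coefs[&amp;amp;quot;alpha1&amp;amp;quot;] - coefs[&amp;amp;quot;beta1&amp;amp;quot;])   # unconditional daily variance&amp;lt;/pre&amp;gt;&lt;br /&gt;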
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or, in technical speak, slots: @fit and @model). Each of these drawers contains a range of different things; you can figure out what they contain by asking for the element names:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-16-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-18-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
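&lt;br /&gt;
As these are daily returns, a common convention is to report annualised volatility by scaling the daily figure with the square root of the number of trading days in a year (roughly 252). A minimal sketch based on this convention:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f_ann &amp;amp;lt;- ug_f*sqrt(252)   # approximate annualised volatility forecasts&lt;br /&gt;
head(ug_f_ann)&amp;lt;/pre&amp;gt;&lt;br /&gt;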
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-19-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
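&lt;br /&gt;
For instance, rather than digging into the slots directly, the package provides extractor methods such as &amp;lt;code&amp;gt;sigma()&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;residuals()&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;fitted()&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;infocriteria()&amp;lt;/code&amp;gt;. A brief sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(sigma(ugfit))     # estimated conditional volatility as a time series&lt;br /&gt;
infocriteria(ugfit)    # the information criteria reported in the fit output&amp;lt;/pre&amp;gt;&lt;br /&gt;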
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than univariate GARCH models, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and the correlation model in one big swoop; however, my experience with this and other packages is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
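&lt;br /&gt;
Should you want different univariate specifications for the three assets, &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; also accepts a list of individual specifications. A sketch (the GJR model for the second asset is purely illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;uspec.h = multispec(list(&lt;br /&gt;
    ugarchspec(mean.model = list(armaOrder = c(1,0))),      # IBM&lt;br /&gt;
    ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;),&lt;br /&gt;
               mean.model = list(armaOrder = c(1,0))),      # BP&lt;br /&gt;
    ugarchspec(mean.model = list(armaOrder = c(1,0)))))     # Google&amp;lt;/pre&amp;gt;&lt;br /&gt;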
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But here we will proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] of this package, provide details). To specify the correlation specification we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
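&lt;br /&gt;
If you were worried about fat tails you could, for instance, replace the multivariate normal with a multivariate Student-t distribution, which is also a standard option in &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1t = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvt&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;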
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we should use the already estimated univariate models as saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and searching online.&lt;br /&gt;
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that they allow for time-variation in the correlations between the assets (there are also constant correlation models, but we do not discuss these here). Therefore we will now learn how to extract them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have at our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and the third dimension has 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
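&lt;br /&gt;
Since &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; is an ordinary three-dimensional array you can manipulate it with standard R tools. As a sketch, the following computes the average of the three pairwise correlations on each day:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_avg &amp;amp;lt;- apply(cor1, 3, function(m) mean(m[lower.tri(m)]))   # daily average pairwise correlation&lt;br /&gt;
plot(as.xts(cor_avg))&amp;lt;/pre&amp;gt;&lt;br /&gt;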
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
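&lt;br /&gt;
The covariance forecasts in the &amp;lt;code&amp;gt;H&amp;lt;/code&amp;gt; element can be handled in the same fashion, assuming it is organised like &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; (a list holding a three-dimensional array). For instance, a sketch recovering the forecast volatility of IBM from the diagonal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Hf &amp;amp;lt;- dccf1@mforecast$H          # covariance forecasts&lt;br /&gt;
sigf_IBM &amp;amp;lt;- sqrt(Hf[[1]][1,1,])  # forecast volatility of IBM (1st asset)&amp;lt;/pre&amp;gt;&lt;br /&gt;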
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of correlation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
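&lt;br /&gt;
A minimal sketch of that workflow, holding back the last 100 observations at the estimation stage and then producing rolling one-step-ahead forecasts over them (the 100 is purely illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit.oos &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100, fit.control = list(eval.se = TRUE))&lt;br /&gt;
fore.oos &amp;amp;lt;- dccforecast(fit.oos, n.ahead = 1, n.roll = 100)&amp;lt;/pre&amp;gt;&lt;br /&gt;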
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4233</id>
		<title>R GARCH</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_GARCH&amp;diff=4233"/>
				<updated>2018-05-03T23:02:24Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: Created page with &amp;quot;= Introduction =  When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
When you are dealing with financial time-series we often have relatively high frequency observations available. It is very common for instance to have daily observations available. In fact it is now possible to obtain hourly, minute, second or even millisecond observations. But here we will restrict ourselves to daily observations. For some assets these will be 7 days a week observations, but for others these will be work-day observations, so typically 5 days a week of observations.&lt;br /&gt;
&lt;br /&gt;
= Packages used =&lt;br /&gt;
&lt;br /&gt;
There are a number of packages that can enable us to estimate volatility models. The packages we will use are the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; for univariate GARCH models and the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; (for multivariate models) package both written by Alexios Ghalanos. We shall also use the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package as it will give us some easy access to some standard financial data.&lt;br /&gt;
&lt;br /&gt;
So please ensure that you install these packes and then load them,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;#install.packages(c(&amp;amp;quot;quantmod&amp;amp;quot;,&amp;amp;quot;rugarch&amp;amp;quot;,&amp;amp;quot;rmgarch&amp;amp;quot;))   # only needed in case you have not yet installed these packages&lt;br /&gt;
library(quantmod)&lt;br /&gt;
library(rugarch)&lt;br /&gt;
library(rmgarch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
Next we set our working directory&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# replace with your directory and uncomment&lt;br /&gt;
# setwd(&amp;amp;quot;YOUR/COPLETE/DIRECTORY/PATH&amp;amp;quot;) &amp;lt;/pre&amp;gt;&lt;br /&gt;
= Data upload =&lt;br /&gt;
&lt;br /&gt;
Here we will use a convenient data retrieval function (&amp;lt;code&amp;gt;getSymbols&amp;lt;/code&amp;gt;) delivered by the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package in order to retrieve some data. This function works, for instance, to retrieve stock data. The default source is [https://finance.yahoo.com/ Yahoo Finance]. If you want to find out what stock has which symbol you should be able to search the internet to find a list of ticker symbols. The following shows how to use the function. But note that my experience is that sometimes the connection does not work and you may get an error message. In that case just retry a few seconds later and it may well work.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;startDate = as.Date(&amp;amp;quot;2007-01-03&amp;amp;quot;) #Specify period of time we are interested in&lt;br /&gt;
endDate = as.Date(&amp;amp;quot;2018-04-30&amp;amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
getSymbols(&amp;amp;quot;IBM&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;IBM&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;GOOG&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;GOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;getSymbols(&amp;amp;quot;BP&amp;amp;quot;, from = startDate, to = endDate)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;BP&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
In your environment you can see that each of these commands loads an object with the respective ticker symbol name. Let&amp;#039;s have a look at one of these dataframes to understand what data these are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;head(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted&lt;br /&gt;
## 2007-01-03    97.18    98.40   96.26     97.27    9196800     73.41806&lt;br /&gt;
## 2007-01-04    97.25    98.79   96.88     98.31   10524500     74.20306&lt;br /&gt;
## 2007-01-05    97.60    97.95   96.91     97.42    7221300     73.53130&lt;br /&gt;
## 2007-01-08    98.50    99.50   98.35     98.90   10340000     74.64834&lt;br /&gt;
## 2007-01-09    99.08   100.33   99.07    100.07   11108200     75.53147&lt;br /&gt;
## 2007-01-10    98.50    99.05   97.93     98.89    8744800     74.64082&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(IBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## An &amp;#039;xts&amp;#039; object on 2007-01-03/2018-04-27 containing:&lt;br /&gt;
##   Data: num [1:2850, 1:6] 97.2 97.2 97.6 98.5 99.1 ...&lt;br /&gt;
##  - attr(*, &amp;amp;quot;dimnames&amp;amp;quot;)=List of 2&lt;br /&gt;
##   ..$ : NULL&lt;br /&gt;
##   ..$ : chr [1:6] &amp;amp;quot;IBM.Open&amp;amp;quot; &amp;amp;quot;IBM.High&amp;amp;quot; &amp;amp;quot;IBM.Low&amp;amp;quot; &amp;amp;quot;IBM.Close&amp;amp;quot; ...&lt;br /&gt;
##   Indexed by objects of class: [Date] TZ: UTC&lt;br /&gt;
##   xts Attributes:  &lt;br /&gt;
## List of 2&lt;br /&gt;
##  $ src    : chr &amp;amp;quot;yahoo&amp;amp;quot;&lt;br /&gt;
##  $ updated: POSIXct[1:1], format: &amp;amp;quot;2018-05-03 22:21:00&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can see that this object contains a range of daily observations (&amp;lt;code&amp;gt;Open&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;High&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Close&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Volume&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Adjusted&amp;lt;/code&amp;gt; share price). We also learn that the object is formatted as an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; object. &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; is a type of time-series format and indeed we learn that the data range from 2007-01-03 to 2018-04-30.&lt;br /&gt;
&lt;br /&gt;
You can in fact produce a somewhat fancy looking chart with the following command (also part of the &amp;lt;code&amp;gt;quantmod&amp;lt;/code&amp;gt; package)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;chartSeries(GOOG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-6-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we are estimating volatility models we work with returns. There is a function that transforms the data to returns.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;rIBM &amp;amp;lt;- dailyReturn(IBM)&lt;br /&gt;
rBP &amp;amp;lt;- dailyReturn(BP)&lt;br /&gt;
rGOOG &amp;amp;lt;- dailyReturn(GOOG)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# We put all data into a data frame for use in the multivariate model&lt;br /&gt;
rX &amp;amp;lt;- data.frame(rIBM, rBP, rGOOG)&lt;br /&gt;
names(rX)[1] &amp;amp;lt;- &amp;amp;quot;rIBM&amp;amp;quot;&lt;br /&gt;
names(rX)[2] &amp;amp;lt;- &amp;amp;quot;rBP&amp;amp;quot;&lt;br /&gt;
names(rX)[3] &amp;amp;lt;- &amp;amp;quot;rGOOG&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
There is also a &amp;lt;code&amp;gt;weeklyReturn&amp;lt;/code&amp;gt; function in case that is what you are interested in.&lt;br /&gt;
&lt;br /&gt;
= Univariate GARCH Model =&lt;br /&gt;
&lt;br /&gt;
Here we are using the functionality provided by the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package written by Alexios Galanos.&lt;br /&gt;
&lt;br /&gt;
== Model Specification ==&lt;br /&gt;
&lt;br /&gt;
The first thing you need to do is to ensure you know what type of GARCH model you want to estimate and then let R know about this. It is the &amp;lt;code&amp;gt;ugarchspec( )&amp;lt;/code&amp;gt; function which is used to let R know about the model type. There is in fact a default specification and the way to invoke this is as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec = ugarchspec()&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; is now a list which contains all the relevant model specifications. Let&amp;#039;s look at them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       GARCH Model Spec          *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## GARCH Model      : sGARCH(1,1)&lt;br /&gt;
## Variance Targeting   : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Mean Dynamics&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Mean Model       : ARFIMA(1,0,1)&lt;br /&gt;
## Include Mean     : TRUE &lt;br /&gt;
## GARCH-in-Mean        : FALSE &lt;br /&gt;
## &lt;br /&gt;
## Conditional Distribution&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Distribution :  norm &lt;br /&gt;
## Includes Skew    :  FALSE &lt;br /&gt;
## Includes Shape   :  FALSE &lt;br /&gt;
## Includes Lambda  :  FALSE&amp;lt;/pre&amp;gt;&lt;br /&gt;
The key issues here are the spec for the &amp;lt;code&amp;gt;Mean Model&amp;lt;/code&amp;gt; (here an ARMA(1,1) model) and the specification for the &amp;lt;code&amp;gt;GARCH Model&amp;lt;/code&amp;gt;, here an &amp;lt;code&amp;gt;sGARCH(1,1)&amp;lt;/code&amp;gt; which is basically a GARCH(1,1). To get details on all the possible specifications and how to change them it is best to consult the [https://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf documentation] of the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say you want to change the mean model from an ARMA(1,1) to an ARMA(1,0), i.e. an AR(1) model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_spec &amp;amp;lt;- ugarchspec(mean.model=list(armaOrder=c(1,0)))&amp;lt;/pre&amp;gt;&lt;br /&gt;
You could call &amp;lt;code&amp;gt;ug_spec&amp;lt;/code&amp;gt; again to check that the model specification has actually changed.&lt;br /&gt;
&lt;br /&gt;
The following is the specification for an # an example of the EWMA Model (although we will not se it below).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ewma_spec = ugarchspec(variance.model=list(model=&amp;amp;quot;iGARCH&amp;amp;quot;, garchOrder=c(1,1)), &lt;br /&gt;
        mean.model=list(armaOrder=c(0,0), include.mean=TRUE),  &lt;br /&gt;
        distribution.model=&amp;amp;quot;norm&amp;amp;quot;, fixed.pars=list(omega=0))&amp;lt;/pre&amp;gt;&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now that we have specified a model to estimate we need to find the best arameters, i.e. we need to estimate the model. This step is achieved by the &amp;lt;code&amp;gt;ugarchfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit = ugarchfit(spec = ug_spec, data = rIBM)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;fit&amp;lt;/code&amp;gt; is now a list that contains a range of results from the estimation. Let&amp;#039;s have a look at the results&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *          GARCH Model Fit        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Conditional Variance Dynamics    &lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## GARCH Model  : sGARCH(1,1)&lt;br /&gt;
## Mean Model   : ARFIMA(1,0,0)&lt;br /&gt;
## Distribution : norm &lt;br /&gt;
## &lt;br /&gt;
## Optimal Parameters&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##         Estimate  Std. Error   t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000220   1.55666  0.11955&lt;br /&gt;
## ar1    -0.013463    0.021425  -0.62835  0.52978&lt;br /&gt;
## omega   0.000015    0.000002   6.56888  0.00000&lt;br /&gt;
## alpha1  0.111158    0.006440  17.25930  0.00000&lt;br /&gt;
## beta1   0.809517    0.005883 137.59775  0.00000&lt;br /&gt;
## &lt;br /&gt;
## Robust Standard Errors:&lt;br /&gt;
##         Estimate  Std. Error  t value Pr(&amp;amp;gt;|t|)&lt;br /&gt;
## mu      0.000342    0.000230  1.48654 0.137136&lt;br /&gt;
## ar1    -0.013463    0.019583 -0.68748 0.491782&lt;br /&gt;
## omega   0.000015    0.000012  1.25867 0.208150&lt;br /&gt;
## alpha1  0.111158    0.054637  2.03450 0.041901&lt;br /&gt;
## beta1   0.809517    0.082783  9.77876 0.000000&lt;br /&gt;
## &lt;br /&gt;
## LogLikelihood : 8364.692 &lt;br /&gt;
## &lt;br /&gt;
## Information Criteria&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                     &lt;br /&gt;
## Akaike       -5.8665&lt;br /&gt;
## Bayes        -5.8560&lt;br /&gt;
## Shibata      -5.8665&lt;br /&gt;
## Hannan-Quinn -5.8627&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                    0.03483  0.8519&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][2]   0.03492  1.0000&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][5]   1.39601  0.8712&lt;br /&gt;
## d.o.f=1&lt;br /&gt;
## H0 : No serial correlation&lt;br /&gt;
## &lt;br /&gt;
## Weighted Ljung-Box Test on Standardized Squared Residuals&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                         statistic p-value&lt;br /&gt;
## Lag[1]                     0.2509  0.6165&lt;br /&gt;
## Lag[2*(p+q)+(p+q)-1][5]    1.2795  0.7938&lt;br /&gt;
## Lag[4*(p+q)+(p+q)-1][9]    1.9518  0.9107&lt;br /&gt;
## d.o.f=2&lt;br /&gt;
## &lt;br /&gt;
## Weighted ARCH LM Tests&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##             Statistic Shape Scale P-Value&lt;br /&gt;
## ARCH Lag[3]     1.295 0.500 2.000  0.2551&lt;br /&gt;
## ARCH Lag[5]     1.603 1.440 1.667  0.5656&lt;br /&gt;
## ARCH Lag[7]     1.935 2.315 1.543  0.7312&lt;br /&gt;
## &lt;br /&gt;
## Nyblom stability test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
## Joint Statistic:  26.6709&lt;br /&gt;
## Individual Statistics:              &lt;br /&gt;
## mu     0.42613&lt;br /&gt;
## ar1    0.06712&lt;br /&gt;
## omega  0.89209&lt;br /&gt;
## alpha1 0.55216&lt;br /&gt;
## beta1  0.15390&lt;br /&gt;
## &lt;br /&gt;
## Asymptotic Critical Values (10% 5% 1%)&lt;br /&gt;
## Joint Statistic:          1.28 1.47 1.88&lt;br /&gt;
## Individual Statistic:     0.35 0.47 0.75&lt;br /&gt;
## &lt;br /&gt;
## Sign Bias Test&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##                    t-value   prob sig&lt;br /&gt;
## Sign Bias           0.2134 0.8310    &lt;br /&gt;
## Negative Sign Bias  1.0137 0.3108    &lt;br /&gt;
## Positive Sign Bias  0.4427 0.6580    &lt;br /&gt;
## Joint Effect        1.6909 0.6390    &lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Adjusted Pearson Goodness-of-Fit Test:&lt;br /&gt;
## ------------------------------------&lt;br /&gt;
##   group statistic p-value(g-1)&lt;br /&gt;
## 1    20     135.6    1.285e-19&lt;br /&gt;
## 2    30     139.3    2.301e-16&lt;br /&gt;
## 3    40     161.8    6.871e-17&lt;br /&gt;
## 4    50     166.2    1.164e-14&lt;br /&gt;
## &lt;br /&gt;
## &lt;br /&gt;
## Elapsed time : 0.7440431&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you are familiar with GARCH models you will recognise some of the parameters. &amp;lt;code&amp;gt;ar1&amp;lt;/code&amp;gt; is the AR1 coefficient of the mean model (here very small and basically insignificant), &amp;lt;code&amp;gt;alpha1&amp;lt;/code&amp;gt; is the coefficient to the squared residuals in the GARCH equation and &amp;lt;code&amp;gt;beta1&amp;lt;/code&amp;gt; is the coefficient to the lagged variance.&lt;br /&gt;
&lt;br /&gt;
Often you will want to use model output for some further analysis. It is therefore important to understand how to extract information such as the parameter estimates, their standard errors or the residuals. The object &amp;lt;code&amp;gt;ugfit&amp;lt;/code&amp;gt; contains all the information. In that object you can find two drawers (or in technical speak slots, @fit and @model). Each of these drawers contains a range of different things. What they contain you can figure out by asking for the element names&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @model slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @model slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@model)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;modelinc&amp;amp;quot;   &amp;amp;quot;modeldesc&amp;amp;quot;  &amp;amp;quot;modeldata&amp;amp;quot;  &amp;amp;quot;pars&amp;amp;quot;       &amp;amp;quot;start.pars&amp;amp;quot;&lt;br /&gt;
##  [6] &amp;amp;quot;fixed.pars&amp;amp;quot; &amp;amp;quot;maxOrder&amp;amp;quot;   &amp;amp;quot;pos.matrix&amp;amp;quot; &amp;amp;quot;fmodel&amp;amp;quot;     &amp;amp;quot;pidx&amp;amp;quot;      &lt;br /&gt;
## [11] &amp;amp;quot;n.start&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;paste(&amp;amp;quot;Elements in the @fit slot&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Elements in the @fit slot&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;names(ugfit@fit)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##  [1] &amp;amp;quot;hessian&amp;amp;quot;         &amp;amp;quot;cvar&amp;amp;quot;            &amp;amp;quot;var&amp;amp;quot;            &lt;br /&gt;
##  [4] &amp;amp;quot;sigma&amp;amp;quot;           &amp;amp;quot;condH&amp;amp;quot;           &amp;amp;quot;z&amp;amp;quot;              &lt;br /&gt;
##  [7] &amp;amp;quot;LLH&amp;amp;quot;             &amp;amp;quot;log.likelihoods&amp;amp;quot; &amp;amp;quot;residuals&amp;amp;quot;      &lt;br /&gt;
## [10] &amp;amp;quot;coef&amp;amp;quot;            &amp;amp;quot;robust.cvar&amp;amp;quot;     &amp;amp;quot;A&amp;amp;quot;              &lt;br /&gt;
## [13] &amp;amp;quot;B&amp;amp;quot;               &amp;amp;quot;scores&amp;amp;quot;          &amp;amp;quot;se.coef&amp;amp;quot;        &lt;br /&gt;
## [16] &amp;amp;quot;tval&amp;amp;quot;            &amp;amp;quot;matcoef&amp;amp;quot;         &amp;amp;quot;robust.se.coef&amp;amp;quot; &lt;br /&gt;
## [19] &amp;amp;quot;robust.tval&amp;amp;quot;     &amp;amp;quot;robust.matcoef&amp;amp;quot;  &amp;amp;quot;fitted.values&amp;amp;quot;  &lt;br /&gt;
## [22] &amp;amp;quot;convergence&amp;amp;quot;     &amp;amp;quot;kappa&amp;amp;quot;           &amp;amp;quot;persistence&amp;amp;quot;    &lt;br /&gt;
## [25] &amp;amp;quot;timer&amp;amp;quot;           &amp;amp;quot;ipars&amp;amp;quot;           &amp;amp;quot;solver&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you wanted to extract the estimated coefficients you would do that in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfit@fit$coef&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            mu           ar1         omega        alpha1         beta1 &lt;br /&gt;
##  3.419000e-04 -1.346260e-02  1.516946e-05  1.111584e-01  8.095171e-01&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var &amp;amp;lt;- ugfit@fit$var   # save the estimated conditional variances&lt;br /&gt;
ug_res2 &amp;amp;lt;- (ugfit@fit$residuals)^2   # save the estimated squared residuals&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s plot the squared residuals and the estimated conditional variance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(ug_res2, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_var, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-16-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model Forecasting ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use an estimated model to subsequently forecast the conditional variance. The function used for this purpose is the &amp;lt;code&amp;gt;ugarchforecast&amp;lt;/code&amp;gt; function. The application is rather straightforward:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ugfore &amp;amp;lt;- ugarchforecast(ugfit, n.ahead = 10)&lt;br /&gt;
ugfore&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## *       GARCH Model Forecast         *&lt;br /&gt;
## *------------------------------------*&lt;br /&gt;
## Model: sGARCH&lt;br /&gt;
## Horizon: 10&lt;br /&gt;
## Roll Steps: 0&lt;br /&gt;
## Out of Sample: 0&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast [T0=2018-04-27]:&lt;br /&gt;
##         Series   Sigma&lt;br /&gt;
## T+1  0.0003685 0.01640&lt;br /&gt;
## T+2  0.0003415 0.01621&lt;br /&gt;
## T+3  0.0003419 0.01604&lt;br /&gt;
## T+4  0.0003419 0.01587&lt;br /&gt;
## T+5  0.0003419 0.01572&lt;br /&gt;
## T+6  0.0003419 0.01558&lt;br /&gt;
## T+7  0.0003419 0.01545&lt;br /&gt;
## T+8  0.0003419 0.01533&lt;br /&gt;
## T+9  0.0003419 0.01521&lt;br /&gt;
## T+10 0.0003419 0.01511&amp;lt;/pre&amp;gt;&lt;br /&gt;
As you can see we have produced forecasts for the next ten days, both for the expected returns (&amp;lt;code&amp;gt;Series&amp;lt;/code&amp;gt;) and for the conditional volatility (square root of the conditional variance). Similar to the object created for model fitting, &amp;lt;code&amp;gt;ugfore&amp;lt;/code&amp;gt; contains two slots (@model and @forecast) and you can use &amp;lt;code&amp;gt;names(ugfore@forecast)&amp;lt;/code&amp;gt; to figure out under which names the elements are saved. For instance you can extract the conditional volatility forecast as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_f &amp;amp;lt;- ugfore@forecast$sigmaFor&lt;br /&gt;
plot(ug_f, type = &amp;amp;quot;l&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-18-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the volatility is the square root of the conditional variance.&lt;br /&gt;
&lt;br /&gt;
To put these forecasts into context let&amp;#039;s display them together with the last 20 observations used in the estimation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;ug_var_t &amp;amp;lt;- c(tail(ug_var,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_res2_t &amp;amp;lt;- c(tail(ug_res2,20),rep(NA,10))  # gets the last 20 observations&lt;br /&gt;
ug_f &amp;amp;lt;- c(rep(NA,20),(ug_f)^2)&lt;br /&gt;
&lt;br /&gt;
plot(ug_res2_t, type = &amp;amp;quot;l&amp;amp;quot;)&lt;br /&gt;
lines(ug_f, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
lines(ug_var_t, col = &amp;amp;quot;green&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-19-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see how the forecast of the conditional variance picks up from the last estimated conditional variance. In fact it decreases from there, slowly, towards the unconditional variance value.&lt;br /&gt;
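&lt;br /&gt;
For an sGARCH(1,1) model this unconditional variance is &amp;lt;math&amp;gt;\omega/(1-\alpha_1-\beta_1)&amp;lt;/math&amp;gt;. As a quick sketch (using the coefficient names printed earlier) you could compute it, together with the implied long-run volatility, directly from the estimated parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;uc_var &amp;amp;lt;- ugfit@fit$coef[&amp;amp;quot;omega&amp;amp;quot;]/(1 - ugfit@fit$coef[&amp;amp;quot;alpha1&amp;amp;quot;] - ugfit@fit$coef[&amp;amp;quot;beta1&amp;amp;quot;])  # unconditional variance&lt;br /&gt;
sqrt(uc_var)  # implied long-run volatility&amp;lt;/pre&amp;gt;&lt;br /&gt;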
&lt;br /&gt;
The &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package has a lot of additional functionality which you can explore through the documentation.&lt;br /&gt;
&lt;br /&gt;
= Multivariate GARCH models =&lt;br /&gt;
&lt;br /&gt;
Often you will want to model the volatility of a vector of assets. This can be done with the multivariate equivalent of the univariate GARCH model. Estimating multivariate GARCH models turns out to be significantly more difficult than estimating univariate ones, but fortunately procedures have been developed that deal with most of these issues.&lt;br /&gt;
&lt;br /&gt;
Here we are using the &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package which has a lot of useful functionality. We are applying it to estimate a multivariate volatility model for the returns of BP, Google/Alphabet and IBM shares.&lt;br /&gt;
&lt;br /&gt;
As for the &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt; package we first need to specify the model we want to estimate. Here we stick with a Dynamic Conditional Correlation (DCC) model (see the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for details). When estimating DCC models one basically estimates individual GARCH-type models (which could differ for each individual asset). These are then used to standardise the individual residuals. As a second step one then has to specify the correlation dynamics of these standardised residuals. It is possible to estimate the parameters of the univariate models and of the correlation model in one go. However, my experience with this, and other packages, is that it is beneficial to separate the two steps.&lt;br /&gt;
&lt;br /&gt;
== Model Setup ==&lt;br /&gt;
&lt;br /&gt;
Here we assume that we are using the same univariate volatility model specification for each of the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# DCC (MVN)&lt;br /&gt;
uspec.n = multispec(replicate(3, ugarchspec(mean.model = list(armaOrder = c(1,0)))))&amp;lt;/pre&amp;gt;&lt;br /&gt;
What does this command do? You will recognise that &amp;lt;code&amp;gt;ugarchspec(mean.model = list(armaOrder = c(1,0)))&amp;lt;/code&amp;gt; specifies an AR(1)-GARCH(1,1) model. By using &amp;lt;code&amp;gt;replicate(3, ugarchspec...)&amp;lt;/code&amp;gt; we replicate this model 3 times (as we have three assets, IBM, Google/Alphabet and BP).&lt;br /&gt;
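&lt;br /&gt;
Note that nothing forces you to use identical specifications for all assets. As a hypothetical sketch (the specification names are made up for illustration; &amp;lt;code&amp;gt;gjrGARCH&amp;lt;/code&amp;gt; is one of the variance models offered by &amp;lt;code&amp;gt;rugarch&amp;lt;/code&amp;gt;), you could instead hand &amp;lt;code&amp;gt;multispec&amp;lt;/code&amp;gt; a list of different specifications, one per asset:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# hypothetical mixed specification - a different model for each asset&lt;br /&gt;
spec.a &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(1,0)))  # default sGARCH(1,1)&lt;br /&gt;
spec.b &amp;amp;lt;- ugarchspec(variance.model = list(model = &amp;amp;quot;gjrGARCH&amp;amp;quot;), mean.model = list(armaOrder = c(1,0)))&lt;br /&gt;
spec.c &amp;amp;lt;- ugarchspec(mean.model = list(armaOrder = c(0,0)))  # constant mean&lt;br /&gt;
uspec.mix = multispec(list(spec.a, spec.b, spec.c))&amp;lt;/pre&amp;gt;&lt;br /&gt;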
&lt;br /&gt;
We now estimate these univariate GARCH models using the &amp;lt;code&amp;gt;multifit&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;multf = multifit(uspec.n, rX)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The results are saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; and you can type &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt; into the command window to see the estimated parameters for these three models. But here we will proceed to specify the DCC model (I assume that you know what a DCC model is; this is not the place to elaborate, and many textbooks, or indeed the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for this package, provide details). To specify the correlation dynamics we use the &amp;lt;code&amp;gt;dccspec&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;spec1 = dccspec(uspec = uspec.n, dccOrder = c(1, 1), distribution = &amp;#039;mvnorm&amp;#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this specification we have to state how the univariate volatilities are modeled (as per &amp;lt;code&amp;gt;uspec.n&amp;lt;/code&amp;gt;) and how complex the dynamic structure of the correlation matrix is (here we are using the most standard &amp;lt;code&amp;gt;dccOrder = c(1, 1)&amp;lt;/code&amp;gt; specification).&lt;br /&gt;
&lt;br /&gt;
== Model Estimation ==&lt;br /&gt;
&lt;br /&gt;
Now we are in a position to estimate the model using the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fit1 = dccfit(spec1, data = rX, fit.control = list(eval.se = TRUE), fit = multf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
We want to estimate the model as specified in &amp;lt;code&amp;gt;spec1&amp;lt;/code&amp;gt;, using the data in &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt;. The option &amp;lt;code&amp;gt;fit.control = list(eval.se = TRUE)&amp;lt;/code&amp;gt; ensures that the estimation procedure produces standard errors for the estimated parameters. Importantly, &amp;lt;code&amp;gt;fit = multf&amp;lt;/code&amp;gt; indicates that we want to use the already estimated univariate models as they were saved in &amp;lt;code&amp;gt;multf&amp;lt;/code&amp;gt;. The way to learn how to use these functions is by a combination of looking at the function&amp;#039;s help (&amp;lt;code&amp;gt;?dccfit&amp;lt;/code&amp;gt;) and googling.&lt;br /&gt;
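&lt;br /&gt;
Typing &amp;lt;code&amp;gt;fit1&amp;lt;/code&amp;gt; into the console prints a full summary of the estimated model. If you merely want the parameter estimates, the standard &amp;lt;code&amp;gt;coef&amp;lt;/code&amp;gt; extractor (which, to my knowledge, &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; provides for fitted objects) is more compact:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;coef(fit1)  # all estimated parameters, including the joint DCC parameters&amp;lt;/pre&amp;gt;&lt;br /&gt;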
&lt;br /&gt;
When you estimate a multivariate volatility model like the DCC model you are typically interested in the estimated covariance or correlation matrices. After all, it is at the core of these models that you allow for time-variation in the correlations between the assets (there are also constant correlation models, but we do not discuss these here). We will therefore now learn how to extract them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# Get the model based time varying covariance (arrays) and correlation matrices&lt;br /&gt;
cov1 = rcov(fit1)  # extracts the covariance matrix&lt;br /&gt;
cor1 = rcor(fit1)  # extracts the correlation matrix&amp;lt;/pre&amp;gt;&lt;br /&gt;
To understand the object we have on our hands here we can have a look at its dimension:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dim(cor1)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1]    3    3 2850&amp;lt;/pre&amp;gt;&lt;br /&gt;
We get three numbers, which tell us that we have a three-dimensional object. The first two dimensions have 3 elements each (think of a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; correlation matrix) and then there is a third dimension with 2850 elements. This tells us that &amp;lt;code&amp;gt;cor1&amp;lt;/code&amp;gt; stores 2850 (&amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt;) correlation matrices, one for each day of data.&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s have a look at the correlation matrix for the last day, day 2850:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor1[,,dim(cor1)[3]]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##            rIBM       rBP    rGOOG&lt;br /&gt;
## rIBM  1.0000000 0.2424297 0.353591&lt;br /&gt;
## rBP   0.2424297 1.0000000 0.275244&lt;br /&gt;
## rGOOG 0.3535910 0.2752440 1.000000&amp;lt;/pre&amp;gt;&lt;br /&gt;
So let&amp;#039;s say we want to plot the time-varying correlation between Google and BP, which is 0.275244 on that last day. In our matrix with returns &amp;lt;code&amp;gt;rX&amp;lt;/code&amp;gt; BP is the second asset and Google the 3rd. So in any particular correlation matrix we want the element in row 2 and column 3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;cor_BG &amp;amp;lt;- cor1[2,3,]   # element (2,3); leaving the last dimension empty implies that we want all elements&lt;br /&gt;
cor_BG &amp;amp;lt;- as.xts(cor_BG)  # imposes the xts time series format - useful for plotting&amp;lt;/pre&amp;gt;&lt;br /&gt;
And now we plot this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(cor_BG)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-28-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because we transformed &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; into an &amp;lt;code&amp;gt;xts&amp;lt;/code&amp;gt; series, the &amp;lt;code&amp;gt;plot&amp;lt;/code&amp;gt; function automatically picks up the date information. As you can see there is significant variation through time, with the correlation typically varying between 0.2 and 0.5.&lt;br /&gt;
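&lt;br /&gt;
If you want to put numbers on that eyeball statement, a quick sketch (using the &amp;lt;code&amp;gt;cor_BG&amp;lt;/code&amp;gt; series created above) is to look at a few quantiles:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;quantile(as.numeric(cor_BG), probs = c(0.05, 0.5, 0.95))  # typical range of the correlation&amp;lt;/pre&amp;gt;&lt;br /&gt;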
&lt;br /&gt;
Let&amp;#039;s plot all three correlations between the three assets.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
plot(as.xts(cor1[1,2,]),main=&amp;amp;quot;IBM and BP&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[1,3,]),main=&amp;amp;quot;IBM and Google&amp;amp;quot;)&lt;br /&gt;
plot(as.xts(cor1[2,3,]),main=&amp;amp;quot;BP and Google&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-29-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Forecasts ==&lt;br /&gt;
&lt;br /&gt;
Often you will want to use your estimated model to produce forecasts for the covariance or correlation matrix.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dccf1 &amp;amp;lt;- dccforecast(fit1, n.ahead = 10)&lt;br /&gt;
dccf1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## &lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## *       DCC GARCH Forecast        *&lt;br /&gt;
## *---------------------------------*&lt;br /&gt;
## &lt;br /&gt;
## Distribution         :  mvnorm&lt;br /&gt;
## Model                :  DCC(1,1)&lt;br /&gt;
## Horizon              :  10&lt;br /&gt;
## Roll Steps           :  0&lt;br /&gt;
## -----------------------------------&lt;br /&gt;
## &lt;br /&gt;
## 0-roll forecast: &lt;br /&gt;
## &lt;br /&gt;
## First 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2539 0.3562&lt;br /&gt;
## [2,] 0.2539 1.0000 0.2883&lt;br /&gt;
## [3,] 0.3562 0.2883 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.2658 0.3587&lt;br /&gt;
## [2,] 0.2658 1.0000 0.2909&lt;br /&gt;
## [3,] 0.3587 0.2909 1.0000&lt;br /&gt;
## &lt;br /&gt;
## . . .&lt;br /&gt;
## . . .&lt;br /&gt;
## &lt;br /&gt;
## Last 2 Correlation Forecasts&lt;br /&gt;
## , , 1&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3202 0.3703&lt;br /&gt;
## [2,] 0.3202 1.0000 0.3027&lt;br /&gt;
## [3,] 0.3703 0.3027 1.0000&lt;br /&gt;
## &lt;br /&gt;
## , , 2&lt;br /&gt;
## &lt;br /&gt;
##        [,1]   [,2]   [,3]&lt;br /&gt;
## [1,] 1.0000 0.3250 0.3714&lt;br /&gt;
## [2,] 0.3250 1.0000 0.3037&lt;br /&gt;
## [3,] 0.3714 0.3037 1.0000&amp;lt;/pre&amp;gt;&lt;br /&gt;
The actual forecasts for the correlation can be accessed via&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;Rf &amp;amp;lt;- dccf1@mforecast$R    # use H for the covariance forecast&amp;lt;/pre&amp;gt;&lt;br /&gt;
When checking the structure of &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;str(Rf)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## List of 1&lt;br /&gt;
##  $ : num [1:3, 1:3, 1:10] 1 0.254 0.356 0.254 1 ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
you realise that the object &amp;lt;code&amp;gt;Rf&amp;lt;/code&amp;gt; is a list with one element. It turns out that this one list item is a three-dimensional array which contains the 10 forecasts of &amp;lt;math&amp;gt;3 \times 3&amp;lt;/math&amp;gt; correlation matrices. If we want to extract, say, the 10 forecasts for the correlation between IBM (1st asset) and BP (2nd asset), we have to do this in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;corf_IB &amp;amp;lt;- Rf[[1]][1,2,]  # Correlation forecasts between IBM and BP&lt;br /&gt;
corf_IG &amp;amp;lt;- Rf[[1]][1,3,]  # Correlation forecasts between IBM and Google&lt;br /&gt;
corf_BG &amp;amp;lt;- Rf[[1]][2,3,]  # Correlation forecasts between BP and Google&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;[[1]]&amp;lt;/code&amp;gt; tells R to go to the first (and here only) list item and then &amp;lt;code&amp;gt;[1,2,]&amp;lt;/code&amp;gt; instructs R to select the (1,2) element of all available correlation matrices.&lt;br /&gt;
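&lt;br /&gt;
Since correlation matrices are symmetric it does not matter whether you pick the (1,2) or the (2,1) element; a quick sanity check:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;all.equal(Rf[[1]][1,2,], Rf[[1]][2,1,])  # should return TRUE&amp;lt;/pre&amp;gt;&lt;br /&gt;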
&lt;br /&gt;
As for the univariate volatility model, let us display the forecasts along with the last in-sample estimates of the correlations.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(3,1))  # this creates a frame with 3 windows to be filled by plots&lt;br /&gt;
c_IB &amp;amp;lt;- c(tail(cor1[1,2,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IB &amp;amp;lt;- c(rep(NA,20),corf_IB) # gets the 10 forecasts&lt;br /&gt;
plot(c_IB,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and BP&amp;amp;quot;)&lt;br /&gt;
lines(cf_IB,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_IG &amp;amp;lt;- c(tail(cor1[1,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_IG &amp;amp;lt;- c(rep(NA,20),corf_IG) # gets the 10 forecasts&lt;br /&gt;
plot(c_IG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation IBM and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_IG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&lt;br /&gt;
&lt;br /&gt;
c_BG &amp;amp;lt;- c(tail(cor1[2,3,],20),rep(NA,10))  # gets the last 20 correlation observations&lt;br /&gt;
cf_BG &amp;amp;lt;- c(rep(NA,20),corf_BG) # gets the 10 forecasts&lt;br /&gt;
plot(c_BG,type = &amp;amp;quot;l&amp;amp;quot;,main=&amp;amp;quot;Correlation BP and Google&amp;amp;quot;)&lt;br /&gt;
lines(cf_BG,type = &amp;amp;quot;l&amp;amp;quot;, col = &amp;amp;quot;orange&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:GarchModelling_files/figure-html/unnamed-chunk-34-1.png]]&amp;lt;!-- --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Further thoughts =&lt;br /&gt;
&lt;br /&gt;
If you are looking at using pseudo-out-of-sample forecasting (i.e. pretending to forecast values that have actually already occurred) you should explore the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; option of the &amp;lt;code&amp;gt;dccfit&amp;lt;/code&amp;gt; function.&lt;br /&gt;
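&lt;br /&gt;
An untested sketch of what that might look like (the &amp;lt;code&amp;gt;out.sample&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;n.roll&amp;lt;/code&amp;gt; arguments are as documented in the package; the value 100 is purely illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# hold back the last 100 observations from the estimation ...&lt;br /&gt;
fit1.oos &amp;amp;lt;- dccfit(spec1, data = rX, out.sample = 100)&lt;br /&gt;
# ... and produce rolling 1-step ahead forecasts over that window&lt;br /&gt;
dccf.oos &amp;amp;lt;- dccforecast(fit1.oos, n.ahead = 1, n.roll = 100)&amp;lt;/pre&amp;gt;&lt;br /&gt;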
&lt;br /&gt;
The &amp;lt;code&amp;gt;rmgarch&amp;lt;/code&amp;gt; package also allows you to estimate multivariate factor GARCH models and copula GARCH models (check the [https://cran.r-project.org/web/packages/rmgarch/vignettes/The_rmgarch_models.pdf documentation] for more details).&lt;br /&gt;
&lt;br /&gt;
An alternative package with a slightly different set of multivariate volatility models is the &amp;lt;code&amp;gt;ccgarch&amp;lt;/code&amp;gt; package.&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R&amp;diff=4232</id>
		<title>R</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R&amp;diff=4232"/>
				<updated>2018-05-03T22:52:32Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Intermediate Techniques */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;R is open-source software that has been adopted by the statistical community as its standard software package. It is command-driven software, meaning that you have to give the software written commands to indicate what you want it to do. At first sight this is not as convenient as menu-driven software, but it has the huge advantage that you can collect a large set of commands in a file (a script file) and then have R execute all these commands in one go. This serves as great documentation of the work you have done and, most importantly, it makes it easy to change a small aspect of your work and rerun the entire project at the press of a button, rather than having to laboriously retrace all your steps through menus.&lt;br /&gt;
&lt;br /&gt;
The fixed cost of learning this software is higher than learning a menu driven statistical software package. But if you engage with this process the rewards will be great.&lt;br /&gt;
&lt;br /&gt;
Last but not least, R has a killer advantage: it is free!&lt;br /&gt;
&lt;br /&gt;
== Installing the Software ==&lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/EHjakj38Nnw?hd=1 Installation Demonstration]&lt;br /&gt;
&lt;br /&gt;
To work with R you will have to install the basic software package R, but we also advise you to install RStudio, an add-on to R (formally called an Integrated Development Environment, or IDE) which makes working with R easier.&lt;br /&gt;
&lt;br /&gt;
As this is open-source software that you get for free, it is perhaps understandable that the webpages from which you get the R software aren&amp;#039;t as slick as you might expect, and the language tends to be somewhat techy. But don&amp;#039;t worry, you&amp;#039;ll be fine.&lt;br /&gt;
&lt;br /&gt;
So here are the steps you should take. &lt;br /&gt;
&lt;br /&gt;
# Download and install the R software, which is available from the [http://cran.rstudio.com/ CRAN] website. Follow the &amp;quot;Download and Install R&amp;quot; link (and do not be tempted to download the source code!) for your operating system. If you have a Windows OS, just choose the &amp;quot;base&amp;quot; package on the following screen. Then follow the usual installation instructions. You could now already work with R, but we recommend that you first undertake the next step.&lt;br /&gt;
# Once we have installed R, we can download and install RStudio. You can download it from the [http://www.rstudio.com/products/rstudio/download/ RStudio] download page.&lt;br /&gt;
&lt;br /&gt;
The basic R installation provides core functionality, but the power of R comes from the ability to use code written by other people to perform statistical and econometric techniques. These additional pieces of software are called packages, and the next step will be to learn how to use them.&lt;br /&gt;
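&lt;br /&gt;
Installing and then loading a package takes just two commands; for example (the package name here is merely illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;AER&amp;quot;)   # download and install the package (needed only once)&lt;br /&gt;
library(AER)                # load the package into the current session&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;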
&lt;br /&gt;
== Data Sets ==&lt;br /&gt;
&lt;br /&gt;
We use a number of datasets on this page. For convenience they are listed here:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Women&amp;#039;s wages &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Crime Statistics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Baseball Wages&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Description&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
| Observations for 753 females on family and work circumstances, hours worked and wages&lt;br /&gt;
| Crime Statistics for 90 counties in North Carolina (US) for Years 1981 to 1987 (Panel Data); includes a number of variables to characterise the counties&lt;br /&gt;
| Salary and other information (such as race, position and performance information) for 353 Baseball Players in 1993&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Files&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
| [[media:mroz.xls|mroz.xls]] &amp;lt;br&amp;gt; [[media:mroz.csv|mroz.csv]] &amp;lt;br&amp;gt; [[MROZ_Variable_Description|Variable Description]]&lt;br /&gt;
| [[media:crim4.xls|crime4.xls]]  &amp;lt;br&amp;gt; [[media:crim4.csv|crime4.csv]] &amp;lt;br&amp;gt; [[Crim4_Variable_Description|Variable Description]]&lt;br /&gt;
| [[media:mlb1.xls|mlb1.xls]] &amp;lt;br&amp;gt; [[media:mlb1.csv|mlb1.csv]] &amp;lt;br&amp;gt; [[MLB1_Variable_Description|Variable Description]]&lt;br /&gt;
|-&lt;br /&gt;
| &amp;#039;&amp;#039;&amp;#039;Source&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
| [http://www.cengagebrain.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&amp;amp;product_isbn_issn=9781111531041&amp;amp;token=8D04240DC39B22D05B49B265F2C8E62C6876DDE99FE979BC4A500075EC976963ED1045639B2C75C4B5B2337F07088998 Wooldridge Book Companion Page]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Following the links in the above table you will also be able to download R data files for these datasets.&lt;br /&gt;
&lt;br /&gt;
== Basic Tasks ==&lt;br /&gt;
&lt;br /&gt;
To illustrate how to perform basic tasks in R we will use the Women&amp;#039;s wages dataset ([[media:mroz.csv|mroz.csv]]). This is a comma-separated values (csv) file containing the dataset we will use for our first steps in R. It is a well-used cross-sectional dataset with 753 observations on female members of the labour force in the US (in 1975). It contains variables such as the number of children, the wage, the hours worked etc. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| First Steps&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Loading Data and&amp;lt;br&amp;gt;Date Formats &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Using&amp;lt;br&amp;gt;Packages&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| [[R_FirstSteps|Discussion]] &lt;br /&gt;
| [[R_Data|Discussion]]&lt;br /&gt;
| [[R_Packages|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Basic Data&amp;lt;br&amp;gt;Analysis&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Data Analysis&amp;lt;br&amp;gt;Tidyverse&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| A&amp;lt;br&amp;gt;Regression&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Analysis|Discussion]] &lt;br /&gt;
| [[R_AnalysisTidy|Discussion]] &lt;br /&gt;
| [[R_Regression|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Creating &amp;lt;br&amp;gt; Graphics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Saving Data and&amp;lt;br&amp;gt;Screen Output&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Graphing|Discussion]] &amp;lt;br&amp;gt; [[R_Graphing_Treat|Treat Yourself]]&lt;br /&gt;
| [[R_SavingData|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Bread and Butter Techniques ==&lt;br /&gt;
&lt;br /&gt;
These are standard econometric tasks that any applied econometrician, and indeed any aspiring economics student, should be familiar with.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Dummy&amp;lt;br&amp;gt;variables&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Predicting from&amp;lt;br&amp;gt;a Regression&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| [[Dummy Variables in R|Discussion]] &lt;br /&gt;
| [[Predicting from Regression in R|Discussion]] &lt;br /&gt;
 &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Standard&amp;lt;br&amp;gt;inference&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Regression&amp;lt;br&amp;gt;diagnostics&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Robust&amp;lt;br&amp;gt;standard errors&lt;br /&gt;
|-&lt;br /&gt;
| [[Regression Inference in R|Discussion]] &lt;br /&gt;
| [[R_reg_diag|Discussion]] &lt;br /&gt;
| [[R_robust_se|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Intermediate Techniques ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Panel&amp;lt;br&amp;gt;Data&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Instrumental Variables&amp;lt;br&amp;gt;Estimation&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Matching&lt;br /&gt;
|-&lt;br /&gt;
| [[Panel in R|Discussion]] &lt;br /&gt;
| [[IV in R|Discussion]] &lt;br /&gt;
| [[R_Matching|Discussion]]  &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate Time&amp;lt;br&amp;gt;Series Modelling&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Multivariate Time&amp;lt;br&amp;gt;Series Modelling&amp;lt;br&amp;gt;VAR&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Time Series&amp;lt;br&amp;gt;Plotting&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Univariate and&amp;lt;br&amp;gt;Multivariate&amp;lt;br&amp;gt;GARCH Modelling&lt;br /&gt;
|-&lt;br /&gt;
| [[R_TimeSeries|Discussion]] &lt;br /&gt;
| [[R_TS_VAR|Discussion]] &lt;br /&gt;
| [[R_TSplots|Discussion]] &amp;lt;br&amp;gt;uses the following data files:&amp;lt;br&amp;gt;[[Media:AggInfl.csv|AggInfl.csv]],[[Media:CoreInfl.csv|CoreInfl.csv]]&amp;lt;br&amp;gt;[[Media:EnergInfl.csv|EnergInfl.csv]],[[Media:FoodInfl.csv|FoodInfl.csv]]&lt;br /&gt;
| [[R_GARCH|Discussion]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Bayesian Estimation&amp;lt;br&amp;gt;Principle&lt;br /&gt;
|-&lt;br /&gt;
| [[R_BayesGrid|Discussion]] &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Some Fun Stuff ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Plotting &amp;lt;br&amp;gt;Maps&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Scraping &amp;lt;br&amp;gt;the internet&lt;br /&gt;
|-&lt;br /&gt;
| [[Maps in R|Discussion]] &lt;br /&gt;
| [[Scraping in R|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Econometric Demonstrations ==&lt;br /&gt;
&lt;br /&gt;
In this section you can find code that can be useful to demonstrate a few econometric issues.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Sampling and&amp;lt;br&amp;gt;LLN and CLT&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Demonstrating OLS&amp;lt;br&amp;gt;estimator unbiasedness&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;| Demonstrating OLS estimator&amp;lt;br&amp;gt;asymptotic behaviour&lt;br /&gt;
|-&lt;br /&gt;
| [[R_Sampling|Discussion]]&lt;br /&gt;
| [[R_Unbiasedness|Discussion]]  &lt;br /&gt;
| [[R_Asymptotics|Discussion]] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors, Maintenance and Contributions ==&lt;br /&gt;
&lt;br /&gt;
This wiki was created by [mailto:ralf.becker@manchester.ac.uk Ralf Becker] and [mailto:james.lincoln@manchester.ac.uk James Lincoln] with the financial support of a University of Manchester CHERIL grant. If you have any suggestions please contact us by email. Contributions to this wiki are encouraged. Please contact us if you are interested.&lt;br /&gt;
&lt;br /&gt;
An easy way to create content for this page is to write RMarkdown documents which can then easily be translated, thanks to pandoc, to MediaWiki format (see [http://nicercode.github.io/guides/reports/]). From the command line call &amp;quot;pandoc -f markdown -t mediawiki FILENAME.md -o FILENAME.mediawiki&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== More references ==&lt;br /&gt;
&lt;br /&gt;
There is a plethora of resources if you want to learn R (which is one reason why this resource does not go into too much detail). Here are a few places to start.&lt;br /&gt;
&lt;br /&gt;
* A dedicated tweet channel for Econometrics with R [https://twitter.com/Rstats4Econ]&lt;br /&gt;
* Rob Hyndman has great material [http://robjhyndman.com/publications/software/], some of which will be referred to here.&lt;br /&gt;
* My colleague Juanjo Medina has material for criminologists that includes good intros to graphing and some basic statistics [http://jjmedinaariza.github.io/R-for-Criminologists/]&lt;br /&gt;
* [http://www.computerworld.com/article/2497143/business-intelligence-beginner-s-guide-to-r-introduction.html?null A Beginner&amp;#039;s Guide to R]&lt;br /&gt;
* Florian Heiss has written an R companion book to Wooldridge&amp;#039;s Introductory Econometrics. It is available for free [http://www.urfie.net/read/mobile/index.html#p=1 online] but you can also get a [http://www.urfie.net/index.html hardcopy] &lt;br /&gt;
* Some R resources provided by [http://www.ats.ucla.edu/stat/r/ UCLA]&lt;br /&gt;
* [http://www.statmethods.net Quick-R] web-site and [http://www.manning.com/kabacoff2/RiA2E_meap_ch1.pdf first chapter of R in Action]&lt;br /&gt;
* Just TryR it! [http://tryr.codeschool.com/levels/1/challenges/1]&lt;br /&gt;
* A practice RData file [https://drive.google.com/file/d/0B-eFeuIjpKsOWmdpOUsxT2Via3M/view?usp=sharing], use this to load required packages [https://drive.google.com/file/d/0B-eFeuIjpKsOc3VYYnh1bEtZcnM/view?usp=sharing]&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=R_BayesGrid&amp;diff=4231</id>
		<title>R BayesGrid</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=R_BayesGrid&amp;diff=4231"/>
				<updated>2018-03-21T00:15:51Z</updated>
		
		<summary type="html">&lt;p&gt;Rb: /* Posterior probabilities for hypothesis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Here we demonstrate how a Bayesian Updating Algorithm works. It is illustrated for the simplest of all cases, a binary variable.&lt;br /&gt;
&lt;br /&gt;
As usual we start by setting the working directory (&amp;#039;&amp;#039;&amp;#039;Make sure you set it to your relevant directory&amp;#039;&amp;#039;&amp;#039;) and by loading up a couple of useful libraries:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;setwd(&amp;amp;quot;YOURFULLDIRECTORPATH&amp;amp;quot;) &lt;br /&gt;
library(car)&lt;br /&gt;
library(AER)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Loading the data - Global Temperature Data. =&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s work with some Global Temperature Data [[media: Global_Temperature.csv]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;td &amp;amp;lt;- read.csv(&amp;amp;quot;Global Temperature.csv&amp;amp;quot;)&lt;br /&gt;
td$Temp &amp;amp;lt;- ts(td$Temp,start = c(1850), end=c(2015),frequency = 1)&lt;br /&gt;
td$CO2emission &amp;amp;lt;- ts(td$CO2emission,start = c(1850), end=c(2015),frequency = 1)&lt;br /&gt;
summary(td)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;##       year           Temp           CO2emission  &lt;br /&gt;
##  Min.   :1850   Min.   :-0.54700   Min.   :  54  &lt;br /&gt;
##  1st Qu.:1891   1st Qu.:-0.30025   1st Qu.: 356  &lt;br /&gt;
##  Median :1932   Median :-0.17300   Median : 983  &lt;br /&gt;
##  Mean   :1932   Mean   :-0.10509   Mean   :2258  &lt;br /&gt;
##  3rd Qu.:1974   3rd Qu.: 0.03375   3rd Qu.:4053  &lt;br /&gt;
##  Max.   :2015   Max.   : 0.74500   Max.   :9167  &lt;br /&gt;
##                                    NA&amp;#039;s   :5&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(td$Temp,main = &amp;amp;quot;Global Temperature (deviations)&amp;amp;quot;, col = &amp;amp;quot;blue&amp;amp;quot;, lwd = 2)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:TempData.png|frame|none|alt=|Temperature Deviations]]&lt;br /&gt;
&lt;br /&gt;
= Preparing the Data =&lt;br /&gt;
&lt;br /&gt;
First we calculate the difference series&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;dtemp &amp;amp;lt;- diff(td$Temp)  # temperature changes&lt;br /&gt;
ups &amp;amp;lt;- (dtemp&amp;amp;gt;0)        # = 1 if increase, 0 otherwise&lt;br /&gt;
n &amp;amp;lt;- length(ups)        # number of years available&amp;lt;/pre&amp;gt;&lt;br /&gt;
= Frequentist approach =&lt;br /&gt;
&lt;br /&gt;
We estimate the sample mean and get a standard deviation for the sample mean:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pbar &amp;amp;lt;- mean(ups)&lt;br /&gt;
sd_pbar &amp;amp;lt;- sqrt(pbar*(1-pbar)/n)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The t-statistic for testing the hypothesis &amp;lt;math&amp;gt;H_0: \mu=0.5&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;H_A: \mu &amp;gt; 0.5&amp;lt;/math&amp;gt; is 0.7016937 and hence we would not reject the null hypothesis at any sensible significance level.&lt;br /&gt;
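&lt;br /&gt;
For completeness, this statistic is just the standardised distance of the sample proportion from 0.5 and can be computed from the quantities above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;tstat &amp;amp;lt;- (pbar - 0.5)/sd_pbar   # t-statistic for H0: mu = 0.5&lt;br /&gt;
tstat&amp;lt;/pre&amp;gt;&lt;br /&gt;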
&lt;br /&gt;
= Bayesian Approach =&lt;br /&gt;
&lt;br /&gt;
The parameter of interest, a proportion (or probability), can take an infinite number of values, but to make the problem easily computable we shall create a discrete grid (with k elements) on which we perform the calculations.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pval &amp;amp;lt;- seq(0,1,0.01)  # creates (0.00, 0.01, 0.02, ... ,0.98,0.99,1.0)&lt;br /&gt;
k &amp;amp;lt;- length(pval)   # number of grid values&amp;lt;/pre&amp;gt;&lt;br /&gt;
Ideally we would use an even finer grid; you could easily achieve this by changing the step size, as the sketch below shows.&lt;br /&gt;
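&lt;br /&gt;
For instance, a step size of 0.001 would give a grid of 1001 points (note that the half-step corrections used below, such as the +0.005 in the &amp;lt;code&amp;gt;pnorm&amp;lt;/code&amp;gt; calls, would then need adjusting to half the new step size):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pval &amp;amp;lt;- seq(0, 1, 0.001)   # a finer grid&lt;br /&gt;
k &amp;amp;lt;- length(pval)          # now 1001 grid points&amp;lt;/pre&amp;gt;&lt;br /&gt;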
&lt;br /&gt;
== Defining the initial Prior Distributions ==&lt;br /&gt;
&lt;br /&gt;
Prior 1: Normal (m1,sd1^2)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;m1 &amp;amp;lt;- 0.4;&lt;br /&gt;
sd1 &amp;amp;lt;- 0.05;&lt;br /&gt;
fp_prior1_in &amp;amp;lt;- pnorm(pval+0.005,m1,sd1);    # CDF&lt;br /&gt;
fp_prior1_in &amp;amp;lt;- append(diff(fp_prior1_in),0,0)   # discretised probs&amp;lt;/pre&amp;gt;&lt;br /&gt;
Prior 2: Normal (m2,sd2^2)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;m2 &amp;amp;lt;- 0.6;&lt;br /&gt;
sd2 &amp;amp;lt;- 0.05;&lt;br /&gt;
fp_prior2_in &amp;amp;lt;- pnorm(pval+0.005,m2,sd2);    # CDF&lt;br /&gt;
fp_prior2_in &amp;amp;lt;- append(diff(fp_prior2_in),0,0)   # discretised probs&amp;lt;/pre&amp;gt;&lt;br /&gt;
Prior 3: Uniform&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fp_prior3_in = rep(1,length(pval))*(1/length(pval))&amp;lt;/pre&amp;gt;&lt;br /&gt;
Prior 4: Beta(5,3)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fp_prior4_in &amp;amp;lt;- pbeta(pval,5,3)    # CDF&lt;br /&gt;
fp_prior4_in &amp;amp;lt;- append(diff(fp_prior4_in),0,0)   # discretised probs&amp;lt;/pre&amp;gt;&lt;br /&gt;
Plot the Prior distributions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(pval,fp_prior1_in, col = &amp;amp;quot;skyblue3&amp;amp;quot;, ylim = c(0,0.1), type = &amp;amp;quot;l&amp;amp;quot; , lwd = 3, main = &amp;amp;quot;Prior Distributions&amp;amp;quot;)&lt;br /&gt;
lines(pval,fp_prior2_in, col = &amp;amp;quot;hotpink3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
lines(pval,fp_prior3_in, col = &amp;amp;quot;olivedrab3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
lines(pval,fp_prior4_in, col = &amp;amp;quot;orange&amp;amp;quot;, lwd = 3)&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:Priors.png|frame|none|alt=|Four different Priors]]&lt;br /&gt;
&lt;br /&gt;
== Updating ==&lt;br /&gt;
&lt;br /&gt;
Let&amp;#039;s say we have a year with a temperature increase. Then the probability at each grid value &amp;lt;code&amp;gt;pval[i]&amp;lt;/code&amp;gt; is updated according to Bayes&amp;#039; rule: &amp;lt;math&amp;gt;P(\text{pval}_i|\text{up}) = P(\text{pval}_i \cap \text{up})/P(\text{up}) = f_{\text{prior}}(i)\,\text{pval}_i/P(\text{up})&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;P(\text{up}) = \sum_j f_{\text{prior}}(j)\,\text{pval}_j&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by defining new variables containing the prior distributions. We do this because, in what follows, the &amp;lt;code&amp;gt;fp_prior&amp;lt;/code&amp;gt; variables will change, but we also want to keep the initial prior distributions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;fp_prior1 &amp;amp;lt;- fp_prior1_in&lt;br /&gt;
fp_prior2 &amp;amp;lt;- fp_prior2_in&lt;br /&gt;
fp_prior3 &amp;amp;lt;- fp_prior3_in&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;# we loop through the n years for which we have observations&lt;br /&gt;
for(sim in 1:n){  &lt;br /&gt;
  # Calculations for Prior 1&lt;br /&gt;
  temp1 &amp;amp;lt;- rep(0,k)  # save the joint probability in here&lt;br /&gt;
  # Now we calculate likelihood * prior&lt;br /&gt;
  # we loop through all possible values for pval&lt;br /&gt;
  for (i in 1:k){&lt;br /&gt;
    temp1[i] &amp;amp;lt;- fp_prior1[i]*pval[i]*ups[sim] + fp_prior1[i]*(1-pval[i])*(1-ups[sim])&lt;br /&gt;
  }&lt;br /&gt;
  # Now we need to normalise the result in temp1 such that all &lt;br /&gt;
  # probabilities sum to 1 = posterior distribution&lt;br /&gt;
  fp_post1 &amp;amp;lt;- temp1 / sum(temp1)&lt;br /&gt;
  # in preparation for the next iteration (next year&amp;#039;s data) we save &lt;br /&gt;
  # the current posterior to be next period&amp;#039;s prior&lt;br /&gt;
  fp_prior1 &amp;amp;lt;- fp_post1  &lt;br /&gt;
  &lt;br /&gt;
  # Calculations for Prior 2&lt;br /&gt;
  temp2 &amp;amp;lt;- rep(0,k)  &lt;br /&gt;
  for (i in 1:k){&lt;br /&gt;
    temp2[i] &amp;amp;lt;- fp_prior2[i]*pval[i]*ups[sim] + fp_prior2[i]*(1-pval[i])*(1-ups[sim])&lt;br /&gt;
  }&lt;br /&gt;
  fp_post2 &amp;amp;lt;- temp2 / sum(temp2)&lt;br /&gt;
  fp_prior2 &amp;amp;lt;- fp_post2  &lt;br /&gt;
  &lt;br /&gt;
  # Calculations for Prior 3&lt;br /&gt;
  temp3 &amp;amp;lt;- rep(0,k)  &lt;br /&gt;
  for (i in 1:k){&lt;br /&gt;
    temp3[i] &amp;amp;lt;- fp_prior3[i]*pval[i]*ups[sim] + fp_prior3[i]*(1-pval[i])*(1-ups[sim])&lt;br /&gt;
  }&lt;br /&gt;
  fp_post3 &amp;amp;lt;- temp3 / sum(temp3)&lt;br /&gt;
  fp_prior3 &amp;amp;lt;- fp_post3  &lt;br /&gt;
  &lt;br /&gt;
  # plot how the posterior updates&lt;br /&gt;
  # uncomment this if you want to see the development of the posterior&lt;br /&gt;
  # plot(pval,fp_prior1_in, col = &amp;amp;quot;skyblue&amp;amp;quot;, ylim = c(0,0.15), type = &amp;amp;quot;l&amp;amp;quot;, xlab=&amp;amp;quot;pval&amp;amp;quot;, ylab=&amp;amp;quot;density&amp;amp;quot;, main = &amp;amp;quot;Posterior Distributions&amp;amp;quot;)&lt;br /&gt;
  # lines(pval,fp_post1, col = &amp;amp;quot;skyblue3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
  # lines(pval,fp_prior2_in, col = &amp;amp;quot;hotpink&amp;amp;quot;)&lt;br /&gt;
  # lines(pval,fp_post2, col = &amp;amp;quot;hotpink3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
  # lines(pval,fp_prior3_in, col = &amp;amp;quot;olivedrab1&amp;amp;quot;)&lt;br /&gt;
  # lines(pval,fp_post3, col = &amp;amp;quot;olivedrab3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
  # text(0.15, 0.125, td$year[sim+1],cex = 3)  # adds the year to the plot&lt;br /&gt;
  &lt;br /&gt;
  # the following lines are merely to slow the loop down so that &lt;br /&gt;
  # we can actually see how the posterior develops - uncomment if needed&lt;br /&gt;
  # if (sim &amp;amp;lt; 4) {Sys.sleep(1.5)}&lt;br /&gt;
  # else if (sim &amp;amp;lt; 10) {Sys.sleep(0.5)}&lt;br /&gt;
  # else {Sys.sleep(0.05)}&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s compare the priors and the posteriors&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;plot(pval,fp_prior1_in, col = &amp;amp;quot;skyblue&amp;amp;quot;, ylim = c(0,0.15), type = &amp;amp;quot;l&amp;amp;quot; ,xlab=&amp;amp;quot;pval&amp;amp;quot;, ylab=&amp;amp;quot;density&amp;amp;quot;, main = &amp;amp;quot;Posterior Distributions&amp;amp;quot;)&lt;br /&gt;
lines(pval,fp_post1, col = &amp;amp;quot;skyblue3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
lines(pval,fp_prior2_in, col = &amp;amp;quot;hotpink&amp;amp;quot;)&lt;br /&gt;
lines(pval,fp_post2, col = &amp;amp;quot;hotpink3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
lines(pval,fp_prior3_in, col = &amp;amp;quot;olivedrab1&amp;amp;quot;)&lt;br /&gt;
lines(pval,fp_post3, col = &amp;amp;quot;olivedrab3&amp;amp;quot;, lwd = 3)&lt;br /&gt;
text(0.15, 0.125, td$year[sim+1],cex = 3)  # adds the year to the plot&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:Posteriors.png|frame|none|alt=|Posterior distributions for priors 1, 2 and 3.]]&lt;br /&gt;
&lt;br /&gt;
== Posterior probabilities for hypothesis ==&lt;br /&gt;
&lt;br /&gt;
So let&amp;#039;s ask again the fundamental question. What is the probability that the parameter &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; has a value &amp;lt;math&amp;gt;&amp;gt;0.5&amp;lt;/math&amp;gt;? Recall that in a classical framework we cannot get this value! Here we will &amp;amp;quot;merely&amp;amp;quot; have to read off probabilities from the posterior distributions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;print(&amp;amp;quot;POSTERIOR PROBABILITIES&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;POSTERIOR PROBABILITIES&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pp1 &amp;amp;lt;- round(sum(fp_post1[pval&amp;amp;gt;0.5]),4)&lt;br /&gt;
print(&amp;amp;quot;Prior 1&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Prior 1&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;print(paste0(&amp;amp;quot;P(pi &amp;amp;gt; 0.5) = &amp;amp;quot;,pp1))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;P(pi &amp;amp;gt; 0.5) = 0.2019&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pp2 &amp;amp;lt;- round(sum(fp_post2[pval&amp;amp;gt;0.5]),4)&lt;br /&gt;
print(&amp;amp;quot;Prior 2&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Prior 2&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;print(paste0(&amp;amp;quot;P(pi &amp;amp;gt; 0.5) = &amp;amp;quot;,pp2))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;P(pi &amp;amp;gt; 0.5) = 0.9471&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;pp3 &amp;amp;lt;- round(sum(fp_post3[pval&amp;amp;gt;0.5]),4)&lt;br /&gt;
print(&amp;amp;quot;Prior 3&amp;amp;quot;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;Prior 3&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;print(paste0(&amp;amp;quot;P(pi &amp;amp;gt; 0.5) = &amp;amp;quot;,pp3))&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;## [1] &amp;amp;quot;P(pi &amp;amp;gt; 0.5) = 0.7624&amp;amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
Let&amp;#039;s show these probabilities graphically.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(1,3))  # tell R to put figures into (1 x 3) matrix&lt;br /&gt;
&lt;br /&gt;
plot(pval,fp_post1, col = &amp;amp;quot;skyblue3&amp;amp;quot;, xlim = c(0.3,0.7), ylim = c(0,0.15),lwd = 3, type = &amp;amp;quot;l&amp;amp;quot; , xlab=&amp;amp;quot;pval&amp;amp;quot;, ylab=&amp;amp;quot;density&amp;amp;quot;, main = &amp;amp;quot;P(pi &amp;amp;gt;0.5|data), Prior 1, N(0.4,0.05^2)&amp;amp;quot;)&lt;br /&gt;
cord.x &amp;amp;lt;- c(0.51,seq(0.51,1,0.01),1)&lt;br /&gt;
cord.y &amp;amp;lt;- c(0,fp_post1[pval&amp;amp;gt;0.5],0)&lt;br /&gt;
polygon(cord.x,cord.y,col=&amp;#039;skyblue&amp;#039;)&lt;br /&gt;
text(0.6, 0.025, pp1,cex = 3)  # adds the prob&lt;br /&gt;
&lt;br /&gt;
plot(pval,fp_post2, col = &amp;amp;quot;hotpink3&amp;amp;quot;, xlim = c(0.3,0.7), ylim = c(0,0.15),lwd = 3, type = &amp;amp;quot;l&amp;amp;quot; , xlab=&amp;amp;quot;pval&amp;amp;quot;, ylab=&amp;amp;quot;density&amp;amp;quot;, main = &amp;amp;quot;P(pi &amp;amp;gt;0.5|data), Prior 2, N(0.6,0.05^2)&amp;amp;quot;)&lt;br /&gt;
cord.x &amp;amp;lt;- c(0.51,seq(0.51,1,0.01),1)&lt;br /&gt;
cord.y &amp;amp;lt;- c(0,fp_post2[pval&amp;amp;gt;0.5],0)&lt;br /&gt;
polygon(cord.x,cord.y,col=&amp;#039;plum1&amp;#039;)&lt;br /&gt;
text(0.6, 0.025, pp2,cex = 3)  # adds the prob&lt;br /&gt;
&lt;br /&gt;
plot(pval,fp_post3, col = &amp;amp;quot;olivedrab3&amp;amp;quot;, xlim = c(0.3,0.7), ylim = c(0,0.15),lwd = 3, type = &amp;amp;quot;l&amp;amp;quot; , xlab=&amp;amp;quot;pval&amp;amp;quot;, ylab=&amp;amp;quot;density&amp;amp;quot;, main = &amp;amp;quot;P(pi &amp;amp;gt;0.5|data), Prior 3, U(0,1)&amp;amp;quot;)&lt;br /&gt;
cord.x &amp;amp;lt;- c(0.51,seq(0.51,1,0.01),1)&lt;br /&gt;
cord.y &amp;amp;lt;- c(0,fp_post3[pval&amp;amp;gt;0.5],0)&lt;br /&gt;
polygon(cord.x,cord.y,col=&amp;#039;darkolivegreen1&amp;#039;)&lt;br /&gt;
text(0.6, 0.025, pp3,cex = 3)  # adds the prob&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[File:PosteriorProbs.png|frame|none|alt=|Posterior probabilities that pi &amp;gt; 0.5.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;r&amp;quot;&amp;gt;par(mfrow=c(1,1))  # reset the plotting window to a single figure&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Rb</name></author>	</entry>

	</feed>