<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://eclr.humanities.manchester.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=LG</id>
		<title>ECLR - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://eclr.humanities.manchester.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=LG"/>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php/Special:Contributions/LG"/>
		<updated>2026-05-16T05:46:36Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.1</generator>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3045</id>
		<title>Lnotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3045"/>
				<updated>2013-09-10T15:10:54Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices whose elements are explicit numbers, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element&amp;#039;&amp;#039; notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
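As a concrete illustration of the typical-element notation (Python is not part of these notes; the sketch and the function name &amp;#039;&amp;#039;elem&amp;#039;&amp;#039; are my own):&lt;br /&gt;
&lt;br /&gt;
```python
# The 2x3 matrix P from the notes, stored as a list of rows.
# Python indexes from 0, so the (i, j) element a_ij is P[i-1][j-1].
P = [[2, 3, 4],
     [3, 1, 5]]

def elem(A, i, j):
    """Return a_ij, using the 1-based row/column convention of the notes."""
    return A[i - 1][j - 1]

m, n = len(P), len(P[0])   # order of the matrix: m x n
print(m, n)                # 2 3
print(elem(P, 2, 3))       # a_23 = 5
```
&lt;br /&gt;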
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or equivalently &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;square&amp;#039;&amp;#039; matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These can be arranged as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can themselves be written as column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation that converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
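Transposition can be sketched in a few lines of Python (illustrative only, not part of the notes; matrices are stored as lists of rows):&lt;br /&gt;
&lt;br /&gt;
```python
# A column vector is a list of single-element rows; its transpose is one row.
c = [[6], [2]]                      # the 2x1 column vector c

def transpose(A):
    """Swap the rows and columns of a matrix stored as a list of rows."""
    return [list(row) for row in zip(*A)]

print(transpose(c))                 # [[6, 2]] : the 1x2 row vector c^T
print(transpose([[6, 2], [3, 5]]))  # transposing a 2x2 matrix
```
&lt;br /&gt;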
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, the operation is not defined: some elements of the larger vector would have no counterpart to pair with.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
&lt;br /&gt;
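The elementwise operations and the linear combination above can be checked with a short Python sketch (the code and its function names are my own, purely for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# Elementwise addition and scalar multiplication of vectors, as defined above.
def add(x, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return [xi + yi for xi, yi in zip(x, y)]

def scale(lam, x):
    """Multiply every element of x by the same scalar lam."""
    return [lam * xi for xi in x]

x, y = [6, 3], [2, 5]
print(add(x, y))                       # [8, 8]
print(add(scale(2, x), scale(-1, y)))  # linear combination 2x - y = [10, 1]
```
&lt;br /&gt;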
For matrices, these ideas carry over immediately by applying the operations to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
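The worked example &amp;lt;math&amp;gt;A-2B&amp;lt;/math&amp;gt; can be verified with a throwaway Python sketch (not part of the notes):&lt;br /&gt;
&lt;br /&gt;
```python
# Linear combination lam*A + mu*B of two matrices, computed elementwise.
A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

def mat_lincomb(lam, A, mu, B):
    """Elementwise lam*a_ij + mu*b_ij, assuming A and B have the same order."""
    return [[lam * a + mu * b for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

print(mat_lincomb(1, A, -2, B))   # A - 2B = [[4, 0], [1, 7]]
```
&lt;br /&gt;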
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix - vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
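The inner product definition, and the numerical example above, can be sketched in Python (illustrative only; the function name is my own):&lt;br /&gt;
&lt;br /&gt;
```python
# Inner product a^T x = a_1*x_1 + ... + a_n*x_n: multiply corresponding
# elements and add up the resulting products.
def inner(a, x):
    assert len(a) == len(x), "vectors must be conformable"
    return sum(ai * xi for ai, xi in zip(a, x))

c = [6, 2]
x = [6, 3]
print(inner(c, x))   # 36 + 6 = 42, matching the example above
```
&lt;br /&gt;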
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
=== Orthogonality ===&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
&lt;br /&gt;
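The orthogonality condition &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; is easy to check numerically (Python sketch, not part of the notes):&lt;br /&gt;
&lt;br /&gt;
```python
# Two vectors are orthogonal when their inner product is zero.
def inner(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1, 1]
y = [-1, 1]
print(inner(x, y) == 0)   # True: x and y are orthogonal
```
&lt;br /&gt;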
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
Figure 1&lt;br /&gt;
&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
&lt;br /&gt;
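The &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; above can be reproduced in Python (illustrative sketch only):&lt;br /&gt;
&lt;br /&gt;
```python
# Matrix - vector product: each element of Ax is the inner product of
# one row of A with x ('across and down').
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[6, 2], [3, 5]]
x = [6, 3]
print(matvec(A, x))   # [42, 33], as in the example above
```
&lt;br /&gt;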
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is straightforward. Write: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\ \ \ \ \ \ \ \text{(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\ \ \ \ \ \ \ \text{(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \ \ \text{(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
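The across-and-down rule translates directly into a few lines of code. A minimal Python sketch using plain nested lists (an illustration of the summation formula, not production code):

```python
def matmul(A, B):
    """Across-and-down rule: c_ik = sum_j a_ij * b_jk."""
    n = len(B)                      # inner dimension
    assert len(A[0]) == n, "matrices are not conformable"
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(len(B[0]))]
            for i in range(len(A))]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 0],
     [0, 1],
     [1, 1]]
print(matmul(A, B))  # [[4, 5], [10, 11]]
```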
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
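The linear-combination viewpoint is easy to confirm numerically: the kth column of AB is b_1k times the first column of A plus b_2k times the second. A rough Python check using the example above:

```python
A = [[6, 2],
     [3, 5]]
B = [[1, 1, 2, 0],
     [1, -1, 0, 0]]

# Column k of AB: a linear combination of the columns of A
# with weights taken from the k-th column of B.
cols = []
for k in range(4):
    col = [sum(A[i][j] * B[j][k] for j in range(2)) for i in range(2)]
    cols.append(col)

# Transpose the list of columns back into rows.
C = [list(row) for row in zip(*cols)]
print(C)  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```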
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product has as many rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and as many columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tediousness of the calculations. The solution for numerical work is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
&lt;br /&gt;
4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
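The inner product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; can be reproduced without Matlab; a minimal Python equivalent:

```python
rowvec = [1, 2, 3, 4]        # the 1x4 vector
colvec = [1, 2, 3, 4]        # the 4x1 vector, stored here as a flat list

# Inner product: across the row, down the column.
ans = sum(r * c for r, c in zip(rowvec, colvec))
print(ans)  # 30
```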
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
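The elementwise product followed by a sum is easy to mimic in Python; a rough equivalent of Matlab's sum(x .* y):

```python
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]

elementwise = [a * b for a, b in zip(x, y)]   # Matlab's x .* y
print(elementwise)        # [1, 4, 9, 16]
print(sum(elementwise))   # 30, the inner product
```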
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;; transposition is written very naturally in Matlab.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
8 4 12 0&lt;br /&gt;
&lt;br /&gt;
8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp() &amp;lt;/code&amp;gt;command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
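Collecting columns into a matrix can be mimicked in Python; a small sketch of Matlab's C = [a b] using zip:

```python
a = [6, 2]          # first column
b = [3, 5]          # second column

# Concatenate the columns side by side, as in Matlab's C = [a b].
C = [list(row) for row in zip(a, b)]
print(C)  # [[6, 3], [2, 5]]
```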
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, we say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Matrices of different sizes cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
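The failure of commutativity is easy to confirm numerically; a short Python check with the 2x2 example above (matmul is just the across-and-down rule):

```python
def matmul(A, B):
    # Across-and-down rule: c_ik = sum_j a_ij * b_jk.
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)        # [[8, 4], [8, -2]]
print(BA)        # [[9, 7], [3, -3]]
print(AB == BA)  # False: A and B do not commute
```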
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{ni}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
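Transposition and the involution property just described can be checked in a few lines of Python:

```python
def transpose(A):
    # Columns of A become the rows of the transpose.
    return [list(col) for col in zip(*A)]

A = [[6, 2, -3],
     [3, 5, -1]]            # a 2x3 matrix

At = transpose(A)
print(At)                   # [[6, 3], [2, 5], [-3, -1]]  (3x2)
print(transpose(At) == A)   # True: transposing twice recovers A
```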
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from equation (4).&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
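The product rule is easy to verify numerically; a quick Python sketch with the matrices used earlier (matmul and transpose are the obvious helpers):

```python
def matmul(A, B):
    # Across-and-down rule: c_ik = sum_j a_ij * b_jk.
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]

lhs = transpose(matmul(A, B))               # (AB)^T
rhs = matmul(transpose(B), transpose(A))    # B^T A^T
print(lhs == rhs)  # True
```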
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as the columns of a matrix in the natural order, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; a matrix with a distinctive pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
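The neutrality of the identity under both pre- and post-multiplication can be checked in Python:

```python
def matmul(A, B):
    # Across-and-down rule: c_ik = sum_j a_ij * b_jk.
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
A = [[6, 2], [3, 5]]

print(matmul(I2, A) == A)  # True: pre-multiplication leaves A unchanged
print(matmul(A, I2) == A)  # True: post-multiplication leaves A unchanged
```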
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
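The two scaling effects can be confirmed in Python with the diagonal example above:

```python
def matmul(A, B):
    # Across-and-down rule: c_ik = sum_j a_ij * b_jk.
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[2, 0], [0, -2]]   # diagonal

print(matmul(A, B))  # [[12, -4], [6, -10]]: each column of A scaled
print(matmul(B, A))  # [[12, 4], [-6, -10]]: each row of A scaled
```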
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is fine on the diagonals, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above-diagonal elements coincides with the triangle of below-diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
&lt;br /&gt;
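As a quick numerical check, the following illustrative Matlab lines verify this proposition for the matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; above:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [6 2 -3; 3 5 -1]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; S = C&amp;#039; * C; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; isequal(S, S&amp;#039;) &lt;br /&gt;
&lt;br /&gt;
The final command returns 1 (true): &amp;lt;math&amp;gt;C^{T}C&amp;lt;/math&amp;gt; equals its own transpose.&lt;br /&gt;
&lt;br /&gt;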
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and hence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; distinguishing it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the &amp;#039;matrix&amp;#039; &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
&lt;br /&gt;
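This calculation is easy to reproduce in Matlab (an illustrative snippet):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [6; 3]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(x * x&amp;#039;) &lt;br /&gt;
&lt;br /&gt;
36    18&lt;br /&gt;
&lt;br /&gt;
18     9&lt;br /&gt;
&lt;br /&gt;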
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;sample mean&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
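In Matlab, the sum vector is produced by the built-in ones function, so (as an illustrative snippet) the sum and the sample mean of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be computed as inner products:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [6; 3]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; ones(2,1)&amp;#039; * x &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; ones(2,1)&amp;#039; * x / 2 &lt;br /&gt;
&lt;br /&gt;
which return 9 and 4.5 respectively.&lt;br /&gt;
&lt;br /&gt;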
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre-multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post-multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, and those in the second row block therefore have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the final &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
&lt;br /&gt;
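The equivalence is easy to check numerically. As an illustrative Matlab snippet, partition an arbitrary &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix into a &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; block and a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; block:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A1 = [1 2; 8 3]; A2 = [0; 7]; x1 = [1; 2]; x2 = 3; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [A1 A2]; x = [x1; x2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; isequal(A * x, A1 * x1 + A2 * x2) &lt;br /&gt;
&lt;br /&gt;
The last command returns 1 (true), confirming that the block-by-block product reproduces &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;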
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms; &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X&amp;#039; * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X&amp;#039; * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124528&lt;br /&gt;
&lt;br /&gt;
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3043</id>
		<title>Lnotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3043"/>
				<updated>2013-09-10T15:02:55Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices whose elements are explicit numbers, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element &amp;#039;&amp;#039;notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular &amp;#039;&amp;#039;matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can have their elements collected into column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so some transformation which converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; into a row vector is called &amp;#039;&amp;#039;transposition, &amp;#039;&amp;#039;and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise. &amp;#039;&amp;#039;If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, there would be some elements left over once all the elements of the smaller dimensioned vector had been used up, which is why the operation is not defined in that case.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
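&lt;br /&gt;
As a quick illustration (the notes use Matlab later; plain Python is used here only as a sketch, and the function name is invented for the example), a linear combination of conformable vectors can be computed elementwise:&lt;br /&gt;
&lt;br /&gt;
```python
def linear_combination(scalars, vectors):
    """Return lam*x + mu*y + ... + nu*z, computed elementwise.

    All vectors must have the same dimension for this to be defined.
    """
    n = len(vectors[0])
    if any(len(v) != n for v in vectors):
        raise ValueError("all vectors must have the same dimension")
    # typical element: lam*x_i + mu*y_i + ... + nu*z_i
    return [sum(lam * v[i] for lam, v in zip(scalars, vectors))
            for i in range(n)]

x = [1, 2, 3]
y = [4, 5, 6]
print(linear_combination([2, -1], [x, y]))  # [-2, -1, 0]
```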
&lt;br /&gt;
For matrices, these ideas carry over immediately: the operations apply to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
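&lt;br /&gt;
The arithmetic above can be checked with a short Python sketch (illustrative only; the Matlab versions of such calculations appear later in these notes):&lt;br /&gt;
&lt;br /&gt;
```python
# A - 2B, computed elementwise: lam*a_ij + mu*b_ij with lam = 1, mu = -2
A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
lam, mu = 1, -2

result = [[lam * A[i][j] + mu * B[i][j] for j in range(2)]
          for i in range(2)]
print(result)  # [[4, 0], [1, 7]]
```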
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule &amp;#039;&amp;#039;for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the ’multiply corresponding elements’ part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
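&lt;br /&gt;
A minimal Python sketch of the inner product (the function name is invented for illustration; the dimension check reflects the conformability requirement):&lt;br /&gt;
&lt;br /&gt;
```python
def inner(a, x):
    """Inner product a^T x: multiply corresponding elements and add up."""
    if len(a) != len(x):
        raise ValueError("vectors are not conformable")
    return sum(a_i * x_i for a_i, x_i in zip(a, x))

c = [6, 2]
x = [6, 3]
print(inner(c, x))  # 42, a real number
```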
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
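&lt;br /&gt;
Checking orthogonality numerically is then a one-line test on the inner product; a Python sketch (illustrative names only):&lt;br /&gt;
&lt;br /&gt;
```python
def are_orthogonal(x, y):
    """x and y are orthogonal when the inner product x^T y is zero."""
    return sum(x_i * y_i for x_i, y_i in zip(x, y)) == 0

print(are_orthogonal([1, 1], [-1, 1]))  # True
print(are_orthogonal([1, 1], [1, -1]))  # True
print(are_orthogonal([1, 1], [1, 2]))   # False
```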
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
Figure 1&lt;br /&gt;
&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination &amp;#039;&amp;#039;of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
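&lt;br /&gt;
Both viewpoints - the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule and the linear combination of columns - can be sketched in Python (illustrative code, not part of the notes):&lt;br /&gt;
&lt;br /&gt;
```python
def matvec(A, x):
    """A x by the across-and-down rule: i-th element is sum_j a_ij * x_j."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[6, 2], [3, 5]]   # A stored by rows
x = [6, 3]
print(matvec(A, x))  # [42, 33]

# Equivalently, A x is the linear combination x_1*a_1 + ... + x_n*a_n
# of the columns a_1, ..., a_n of A.
columns = list(zip(*A))  # columns of A: (6, 3) and (2, 5)
combo = [sum(x[j] * col[i] for j, col in enumerate(columns))
         for i in range(len(A))]
print(combo)  # [42, 33]
```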
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\ \ \ \ \ \ \ \text{(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\ \ \ \ \ \ \ \text{(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \ \ \text{(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
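&lt;br /&gt;
The definition &amp;lt;math&amp;gt;c_{ik}=\sum_{j=1}^{n}a_{ij}b_{jk}&amp;lt;/math&amp;gt; translates directly into code. A Python sketch that reproduces the example above (illustrative only):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    """C = AB, with c_ik = sum_j a_ij * b_jk (the across-and-down rule).

    The number of columns in A must match the number of rows in B.
    """
    m, n, r = len(A), len(B), len(B[0])
    if len(A[0]) != n:
        raise ValueError("A and B are not conformable")
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(r)]
            for i in range(m)]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(matmul(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```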
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product will have number of rows equal to the number in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and number of columns equal to the number in &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tediousness of the calculations. The solution, for numerical work, is to appeal to the computer: programs like Matlab and Excel (and a number of others, some of them free) handle such calculations easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
&lt;br /&gt;
4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of the elementwise product &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this calculation to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed, as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;: transposition in Matlab is expressed very naturally by the prime.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
8 4 12 0&lt;br /&gt;
&lt;br /&gt;
8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp()&amp;lt;/code&amp;gt; command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, we say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
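&lt;br /&gt;
The non-commutativity above is easy to verify numerically. The sketch below uses plain Python (no libraries; the helper name &amp;#039;&amp;#039;matmul&amp;#039;&amp;#039; is illustrative) and reuses the matrices from the example:&lt;br /&gt;

```python
# Plain-Python matrix product via the "across and down" rule:
# entry (i, j) of AB is the inner product of row i of A with column j of B.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

AB = matmul(A, B)  # [[8, 4], [8, -2]]
BA = matmul(B, A)  # [[9, 7], [3, -3]]
print(AB != BA)    # True: A and B do not commute
```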
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
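&lt;br /&gt;
A minimal plain-Python sketch of transposition (the helper name &amp;#039;&amp;#039;transpose&amp;#039;&amp;#039; is illustrative) confirms both the dimension swap and &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
# Transposition: the columns of A become the rows of A^T.
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[6, 2, -3],
     [3, 5, -1]]            # 2 x 3

At = transpose(A)           # 3 x 2: [[6, 3], [2, 5], [-3, -1]]
print(len(At), len(At[0]))  # 3 2
print(transpose(At) == A)   # True: (A^T)^T = A
```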
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that, with &amp;lt;math&amp;gt;C=AB&amp;lt;/math&amp;gt;, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the summation index reflects the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from the expression for &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; displayed above.&lt;br /&gt;
&lt;br /&gt;
More generally, for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; of dimension &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
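&lt;br /&gt;
The product rule can be checked numerically for any conformable pair of matrices. A plain-Python sketch (helper names are illustrative), using a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;3\times4&amp;lt;/math&amp;gt; matrix with arbitrary entries:&lt;br /&gt;

```python
# Numerical check of the product rule for transposition: (AB)^T = B^T A^T.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]                 # 2 x 3
B = [[1, 0, 2, 1],
     [0, 1, 1, 2],
     [3, 1, 0, 1]]              # 3 x 4

C = matmul(A, B)                # 2 x 4
print(transpose(C) == matmul(transpose(B), transpose(A)))  # True
```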
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
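&lt;br /&gt;
A quick plain-Python check of this annihilation property (helper names are illustrative):&lt;br /&gt;

```python
# The zero matrix annihilates any conformable product: 0A = 0 and A0 = 0.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def zeros(m, n):
    return [[0] * n for _ in range(m)]

A = [[6, 2], [3, 5]]
print(matmul(zeros(2, 2), A) == zeros(2, 2))  # True
print(matmul(A, zeros(2, 2)) == zeros(2, 2))  # True
```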
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as the columns of a matrix in the natural order &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
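&lt;br /&gt;
This identity property is also easy to confirm in plain Python; the &amp;#039;&amp;#039;identity&amp;#039;&amp;#039; helper below simply stacks the coordinate vectors as columns (helper names are illustrative):&lt;br /&gt;

```python
# I_n leaves any conformable matrix unchanged under pre or post multiplication.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def identity(n):
    # coordinate vectors e_1, ..., e_n as columns: 1s on the diagonal
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[6, 2], [3, 5]]
I2 = identity(2)
print(matmul(I2, A) == A)  # True
print(matmul(A, I2) == A)  # True
```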
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whilst pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
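&lt;br /&gt;
The column-scaling versus row-scaling effect can be reproduced with the same matrices in plain Python (helper names are illustrative):&lt;br /&gt;

```python
# Post multiplication by a diagonal matrix scales the COLUMNS of A;
# pre multiplication scales the ROWS.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[6, 2], [3, 5]]
D = [[2, 0], [0, -2]]  # diagonal elements 2 and -2

print(matmul(A, D))  # [[12, -4], [6, -10]]: columns scaled by 2, -2
print(matmul(D, A))  # [[12, 4], [-6, -10]]: rows scaled by 2, -2
```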
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is fine on the diagonal, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off-diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above-diagonal elements coincides with the triangle of below-diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
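&lt;br /&gt;
The &amp;lt;math&amp;gt;C^{T}C&amp;lt;/math&amp;gt; calculation above can be checked in plain Python (helper names are illustrative):&lt;br /&gt;

```python
# For any m x n matrix A, the product A^T A is n x n and symmetric.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

C = [[6, 2, -3],
     [3, 5, -1]]             # 2 x 3

S = matmul(transpose(C), C)  # 3 x 3
print(S == transpose(S))     # True: S is symmetric
print(S)                     # [[45, 27, -21], [27, 29, -11], [-21, -11, 10]]
```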
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and hence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; being used to distinguish it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
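&lt;br /&gt;
A plain-Python sketch of the outer product (the helper name &amp;#039;&amp;#039;outer&amp;#039;&amp;#039; is illustrative), reproducing the &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; example and a rectangular case:&lt;br /&gt;

```python
# Outer product: the (i, j) entry of x y^T is x_i * y_j.
def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

x = [6, 3]
print(outer(x, x))          # [[36, 18], [18, 9]] - symmetric
print(outer(x, [1, 2, 3]))  # 2 x 3: each row is y^T scaled by an element of x
```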
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}.&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;sample mean&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x}.&amp;lt;/math&amp;gt;&lt;br /&gt;
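&lt;br /&gt;
The sum-and-average trick is one line of plain Python (helper names are illustrative):&lt;br /&gt;

```python
# The sum vector: an inner product with 1_n sums the elements of x,
# and dividing by n gives the sample mean.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

x = [6, 3]
ones = [1] * len(x)
total = inner(ones, x)
print(total)           # 9
print(total / len(x))  # 4.5, the sample mean
```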
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of an &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, and those in the second row block therefore have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point of using partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3},&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors: a much simpler representation than the use of summations.&lt;br /&gt;
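&lt;br /&gt;
As a quick check of this rule, take the block-diagonal matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; above, with &amp;lt;math&amp;gt;\mathbf{x}_{1}=\mathbf{x}_{2}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;B\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\mathbf{x}_{1}\\&lt;br /&gt;
B_{22}\mathbf{x}_{2}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
11\\&lt;br /&gt;
11\\&lt;br /&gt;
11&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; which agrees with multiplying &amp;lt;math&amp;gt;B\mathbf{x}&amp;lt;/math&amp;gt; out element by element.&lt;br /&gt;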
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; defined above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But what are the row dimensions of the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; And what are the possible column dimensions of the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students, collected in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
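&lt;br /&gt;
Writing this out, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;\mathbf{1}_{12}\alpha&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\alpha,&amp;lt;/math&amp;gt; that of &amp;lt;math&amp;gt;\mathbf{x}\beta&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;x_{i}\beta,&amp;lt;/math&amp;gt; and that of &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;u_{i},&amp;lt;/math&amp;gt; so the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of the vector equation is just &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i}.&amp;lt;/math&amp;gt;&lt;br /&gt;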
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load('weights.mat'); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X’ * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X’ * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
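&lt;br /&gt;
In terms of the data, these quantities have a simple structure: &amp;lt;math&amp;gt;X^{T}X=\left[\begin{array}{cc}&lt;br /&gt;
n &amp;amp; \sum x_{i}\\&lt;br /&gt;
\sum x_{i} &amp;amp; \sum x_{i}^{2}&lt;br /&gt;
\end{array}\right],\ \ \ X^{T}\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\sum y_{i}\\&lt;br /&gt;
\sum x_{i}y_{i}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n=12&amp;lt;/math&amp;gt; here: summing the height column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; gives &amp;lt;math&amp;gt;802,&amp;lt;/math&amp;gt; and summing its squares gives &amp;lt;math&amp;gt;53792,&amp;lt;/math&amp;gt; matching the output above.&lt;br /&gt;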
&lt;br /&gt;
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3042</id>
		<title>Lnotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Lnotes&amp;diff=3042"/>
				<updated>2013-09-10T14:54:50Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Created page with &amp;quot;= Matrices =  In the PreSession Maths course, a matrix was defined as follows:  &amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, con-  ventional...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices that have explicit numerical elements, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element&amp;#039;&amp;#039; notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;square&amp;#039;&amp;#039; matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; displayed above can be seen as column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation which converts a column vector into a row vector, and vice versa, would be useful. This process is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, there will be some elements left over once all the elements of the smaller vector have been used up, so the sum or difference is not defined.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
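&lt;br /&gt;
For example, with &amp;lt;math&amp;gt;\lambda=2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-1&amp;lt;/math&amp;gt; and the vectors &amp;lt;math&amp;gt;\mathbf{a},\mathbf{b}&amp;lt;/math&amp;gt; defined earlier: &amp;lt;math&amp;gt;2\mathbf{a}-\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
12-2\\&lt;br /&gt;
6-5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
10\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;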
&lt;br /&gt;
For matrices, these ideas carry over immediately: apply to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
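&lt;br /&gt;
Explicitly: &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]=(1)(-1)+(1)(1)=0.&amp;lt;/math&amp;gt;&lt;br /&gt;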
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y},&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see the figure below. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
[[Image:0C__courses_econometric_methods_yed_orthy_example.pdf|fig:]]&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
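The ''across and down'' rule from this example can be cross-checked in a few lines of code (a Python sketch using plain lists, added for illustration; the notes themselves use Matlab):

```python
# The "across and down" rule for the worked example:
# A = [6 2; 3 5], x = [6; 3].
A = [[6, 2],
     [3, 5]]
x = [6, 3]

# Each element of Ax is the inner product of a row of A with x.
Ax = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
print(Ax)  # [42, 33]
```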
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination &amp;#039;&amp;#039;of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right].&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
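Both viewpoints — the typical-element sum and the linear combination of columns — can be checked side by side (an illustrative Python sketch with hypothetical matrices, not part of the original notes):

```python
# The i-th element of Ax is sum_j a_ij x_j ("across and down"),
# and Ax also equals the linear combination of the columns of A
# with weights x_1, ..., x_n.
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix (illustrative values)
x = [1, 0, -1]           # a 3 x 1 vector

by_rows = matvec(A, x)

# Linear combination of the columns of A with weights x_j:
cols = [[A[i][j] for i in range(len(A))] for j in range(len(x))]
by_cols = [sum(x[j] * cols[j][i] for j in range(len(x))) for i in range(len(A))]

print(by_rows, by_cols)  # both give [-2, -2]
```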
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\,\,\,\,\,\,\,(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \,\,\,\,\,\,(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\,\,\,\,\,\,\,(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation ([eq:ab]), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
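The elementwise rule &amp;lt;math&amp;gt;c_{ik}=\sum_{j}a_{ij}b_{jk}&amp;lt;/math&amp;gt; can be coded directly and used to reproduce this worked example (a Python sketch with plain lists, added for illustration):

```python
# Matrix product via the elementwise rule c_ik = sum_j a_ij b_jk.
def matmul(A, B):
    n, r = len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(r)]
            for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0],
     [1, -1, 0, 0]]

C = matmul(A, B)
print(C)  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```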
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product will have number of rows equal to the number in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and number of columns equal to the number in &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
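The conformability rule can be expressed as a small shape check (an illustrative Python sketch; the function name is hypothetical):

```python
# Conformability: AB is defined only when the number of columns of A
# equals the number of rows of B; the product is then m x r.
def product_shape(A_shape, B_shape):
    m, n = A_shape
    p, r = B_shape
    if n != p:
        raise ValueError("not conformable: A has %d columns, B has %d rows" % (n, p))
    return (m, r)

print(product_shape((2, 3), (3, 4)))   # (2, 4)
# product_shape((2, 3), (2, 3)) would raise ValueError
```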
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tedium of the calculations. The solution for numerical calculation is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
&lt;br /&gt;
4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, just because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
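The same elementwise-product-then-sum computation of the inner product can be mirrored outside Matlab (an illustrative Python sketch with plain lists):

```python
# The inner product as the sum of the elementwise product,
# mirroring Matlab's sum(x .* y).
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]

elementwise = [a * b for a, b in zip(x, y)]  # like x .* y
inner = sum(elementwise)                     # like sum(x .* y)
print(inner)  # 30, matching rowvec*colvec above
```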
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec) &lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;: transposition in Matlab is very natural.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
 8  4 12  0&lt;br /&gt;
&lt;br /&gt;
 8 -2  6  0&lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp()&amp;lt;/code&amp;gt; command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
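This failure of commutativity is easy to confirm numerically (an illustrative Python sketch reproducing the example above with plain lists):

```python
# Checking that AB and BA differ for the 2 x 2 example in the text.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)  # [[8, 4], [8, -2]]
print(BA)  # [[9, 7], [3, -3]]
```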
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from equation ([eq:c23]).&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
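The Product Rule for Transposition can also be verified numerically (an illustrative Python sketch; the matrices below are hypothetical small examples):

```python
# Numerical check of the product rule (AB)^T = B^T A^T.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3], [4, 5, 6]]           # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]         # 3 x 2

lhs = transpose(matmul(A, B))        # (AB)^T, a 2 x 2 matrix transposed
rhs = matmul(transpose(B), transpose(A))
print(lhs == rhs)  # True
```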
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
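A quick check of the annihilating property of the zero matrix (an illustrative Python sketch with plain lists):

```python
# The zero matrix in any conformable product: 0A = 0 and A0 = 0.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

Z = [[0, 0], [0, 0]]
A = [[6, 2], [3, 5]]

print(matmul(Z, A))  # [[0, 0], [0, 0]]
print(matmul(A, Z))  # [[0, 0], [0, 0]]
```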
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as the columns of a matrix in the natural order, a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix runs from top left to bottom right: the elements on it are all equal to 1, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
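The identity property can be checked numerically. Below is a minimal sketch in Python (plain lists rather than Matlab; the helper names `matmul` and `identity` are made up for illustration), applying the across and down rule directly:

```python
def matmul(X, Y):
    """Across and down rule: element (i, j) is row i of X times column j of Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    """I_n: ones on the diagonal, zeros everywhere else."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[6, 2],
     [3, 5]]

pre  = matmul(identity(2), A)   # I_2 A
post = matmul(A, identity(2))   # A I_2
# Both products reproduce A unchanged.
```

Pre and post multiplication both return `[[6, 2], [3, 5]]`, matching the hand calculation above.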
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
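A short Python sketch (the helper name `matmul` is illustrative, not from the notes) confirms the scaling rule with the same matrices A and B:

```python
def matmul(X, Y):
    # Across and down rule for matrices stored as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[6, 2],
     [3, 5]]
B = [[2, 0],
     [0, -2]]   # diagonal elements: 2 and -2

AB = matmul(A, B)   # post multiplication: columns of A scaled by 2 and -2
BA = matmul(B, A)   # pre multiplication: rows of A scaled by 2 and -2
```

`AB` is `[[12, -4], [6, -10]]` and `BA` is `[[12, 4], [-6, -10]]`, as in the display above.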
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This holds automatically on the diagonal, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off-diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or, more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above diagonal elements coincides with the triangle of below diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
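The proposition can be illustrated numerically with the matrix C used above; a Python sketch (the helper names are illustrative):

```python
def matmul(X, Y):
    # Across and down rule for matrices stored as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    # Rows of the transpose are the columns of X.
    return [list(row) for row in zip(*X)]

C = [[6, 2, -3],
     [3, 5, -1]]

CtC = matmul(transpose(C), C)       # the 3 x 3 product computed in the text
symmetric = CtC == transpose(CtC)   # True: C^T C equals its own transpose
```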
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; being used to distinguish it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
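In Python the outer product amounts to exactly one product per (row, column) pair; a sketch (the function name `outer` is illustrative):

```python
def outer(x, y):
    """Outer product x y^T of column vectors stored as flat lists."""
    return [[xi * yj for yj in y] for xi in x]

x = [6, 3]
xxT = outer(x, x)   # [[36, 18], [18, 9]], symmetric as noted in the text
```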
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;sample mean&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
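The sum vector calculation is a one-liner; a sketch in Python:

```python
x = [6, 3]
ones = [1] * len(x)                             # the sum vector 1_n
total = sum(o * xi for o, xi in zip(ones, x))   # inner product 1^T x
mean = total / len(x)                           # the sample mean
```

`total` is 9 and `mean` is 4.5, matching the text.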
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of an &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
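A quick Python check of this observation (the matrix entries and helper names below are made up for illustration):

```python
def transpose(X):
    return [list(row) for row in zip(*X)]

def is_lower_triangular(X):
    """True when every element strictly above the main diagonal is zero."""
    return all(X[i][j] == 0
               for i in range(len(X)) for j in range(len(X[0])) if j > i)

A = [[1, 0, 0],
     [4, 2, 0],
     [5, 6, 3]]   # hypothetical values for a_11, ..., a_33

lower = is_lower_triangular(A)              # True
upper = is_lower_triangular(transpose(A))   # False: A^T is upper triangular
```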
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, so that those in the second row block have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
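A numerical sketch of this block representation in Python (the matrix, vector, and block sizes below are made up for illustration):

```python
def matvec(A, x):
    # Matrix-vector product by the across and down rule.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 3, 4, 5],
     [6, 7, 8, 9, 10]]    # 2 x 5
x = [1, 1, 2, 0, 3]
sizes = [2, 2, 1]         # n1, n2, n3 with n1 + n2 + n3 = 5

full = matvec(A, x)

# Split the columns of A and the entries of x into matching blocks,
# multiply block by block, and add the results.
blockwise, start = [0] * len(A), 0
for n_k in sizes:
    A_k = [row[start:start + n_k] for row in A]
    x_k = x[start:start + n_k]
    blockwise = [b + s for b, s in zip(blockwise, matvec(A_k, x_k))]
    start += n_k
# blockwise now equals full: A x = A1 x1 + A2 x2 + A3 x3
```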
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; partitioned as above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
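To see that the matrix form reproduces the original equations row by row, here is a Python sketch (the values chosen for the parameters are hypothetical, purely for illustration):

```python
heights = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]   # the x column of D
X = [[1, h] for h in heights]     # X = [ 1_12  x ]
alpha, beta = -100.0, 3.5         # hypothetical parameter values
delta = [alpha, beta]

# Row i of X*delta is 1*alpha + x_i*beta, the systematic part of y_i.
Xdelta = [sum(a * d for a, d in zip(row, delta)) for row in X]
rows_match = all(v == alpha + beta * h for v, h in zip(Xdelta, heights))
```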
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(’weights.mat’); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X’ * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X’ * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
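The same quantities can be cross-checked without Matlab. The Python sketch below recomputes X'X and X'y directly from the two columns of D above (it assumes weights.mat contains exactly this X and y):

```python
weights = [155, 150, 180, 135, 156, 168, 178, 160, 132, 145, 139, 152]  # y
heights = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]              # x

n = len(weights)
sx  = sum(heights)
sxx = sum(h * h for h in heights)
sxy = sum(w * h for w, h in zip(weights, heights))

# With X = [ 1_12  x ]:  X'X = [[n, sum x], [sum x, sum x^2]],
#                        X'y = [sum y, sum x*y].
xtx = [[n, sx], [sx, sxx]]
xty = [sum(weights), sxy]
```

The `xtx` entries reproduce the `disp(xtx)` output above.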
&lt;br /&gt;
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3041</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3041"/>
				<updated>2013-09-10T14:52:33Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
The lecture notes are also available here,&lt;br /&gt;
&lt;br /&gt;
[[Lnotes|Lnotes]]&lt;br /&gt;
&lt;br /&gt;
and the corresponding exercise sheet is&lt;br /&gt;
&lt;br /&gt;
[[Exercise Sheet 2|XS2]]&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and View Details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3037</id>
		<title>Exercise Sheet 2</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3037"/>
				<updated>2013-09-10T14:41:57Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Blackboard is the University’s virtual learning environment. All of the material for this module, including formative and summative assessments, will be available in Blackboard.&lt;br /&gt;
&lt;br /&gt;
You will normally enter the Blackboard site for this course through [https://my.manchester.ac.uk/ MyManchester] at https://my.manchester.ac.uk/ .&lt;br /&gt;
&lt;br /&gt;
All of the Blackboard material for the course is organised into Lecture topics, so that lecture notes, lecture slides, and exercise sheets can be found in the folder for each Lecture topic.&lt;br /&gt;
&lt;br /&gt;
The questions on each exercise sheet may be answered in the traditional way, on paper, and handed in to be marked. Alternatively you can answer the majority (but not all) of the exercise sheet questions online, using Maple TA. All of the questions on this exercise sheet can be answered in Maple TA. The name of the matching Maple TA assignment is indicated in each question in this exercise sheet.&lt;br /&gt;
&lt;br /&gt;
= Using Maple TA =&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. In this module, most of the questions on the paper Exercise Sheets are also available as Maple TA assignments. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
http://place36.placementtester.com/manchester&amp;lt;br /&amp;gt;and this link is also given in each Lecture folder, for convenience. Log in with your registration number (first 7 digits only): the password is also your registration number. Once you have logged in, there is generally a wait whilst the assignments are loaded. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON61001 Econometric Methods 2013-14&amp;lt;br /&amp;gt;by clicking on the entry for this course. This will bring up a page of assignments. Click on the assignment you want to do - the notation follows that in the exercise sheets. The assignments are organised by question group, as in the Exercise Sheets, or by individual question, each a component of a question group. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between “Print assignment for off-line work” and “Work assignment on-line right now”. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the “Work ... online” option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (&amp;lt;math&amp;gt; +, -, *, /, ^ &amp;lt;/math&amp;gt;) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;lt;math&amp;gt;1/x-1&amp;lt;/math&amp;gt; - is it &amp;lt;math&amp;gt;(1/x)-1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1/(x-1)&amp;lt;/math&amp;gt;? Information about the entry of vectors and matrices in your answers is given in the next section, although the instructions are often repeated in questions.&lt;br /&gt;
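The bracket ambiguity mentioned above is easy to check numerically. As an illustration only (plain Python, not Maple TA input syntax, with x = 2 chosen arbitrarily):

```python
# Hypothetical check of the two readings of "1/x - 1", using x = 2
x = 2
a = (1 / x) - 1   # division first: 0.5 - 1 = -0.5
b = 1 / (x - 1)   # subtraction first: 1 / 1 = 1.0
assert a != b     # the two readings give different values
```

Without brackets, a grader (human or automatic) cannot tell which of the two values was intended.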
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and View Details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
Most of the Exercise Sheets have additional randomised questions associated with them in Maple TA. These are questions where Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These randomised questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the Refresh button at the top of the page to get another question.&lt;br /&gt;
&lt;br /&gt;
= Using the equation editor in Maple TA =&lt;br /&gt;
&lt;br /&gt;
Usually it is quicker to enter your answers directly in Maple TA rather than using the Equation Editor. Using the Equation Editor is straightforward: see Figure 1. To select a symbol, click on the required panel and select the symbol required. Figure 2 shows the subscript and superscript panel, and Figure 3 shows the array selection panel. Usage is fairly self-evident - keep trying until you find the required symbol.&lt;br /&gt;
&lt;br /&gt;
Figure 1&lt;br /&gt;
&lt;br /&gt;
[[File:equation_editor001.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 2&lt;br /&gt;
&lt;br /&gt;
[[File:equation_editor_subscripts.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 3&lt;br /&gt;
&lt;br /&gt;
[[File:equation_editor_arrays.png]]&lt;br /&gt;
&lt;br /&gt;
= Entering vectors and matrices into Maple TA =&lt;br /&gt;
&lt;br /&gt;
Many questions on this exercise sheet require vectors, matrices or indexed expressions as their answers. Some questions provide an array of the right size for the answer, so that none of the methods below are required.&lt;br /&gt;
&lt;br /&gt;
There are a variety of ways of entering these expressions into Maple TA - choose the one you feel most comfortable using.&lt;br /&gt;
&lt;br /&gt;
# Vector entry.&lt;br /&gt;
## You can use Maple “text entry” for column vectors, e.g. for the &amp;lt;math&amp;gt;3\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\left(\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right),&amp;lt;/math&amp;gt; enter (without the quotes) Vector(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## For a row vector, use &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;, as above, or Vector[row](&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## Or, you can right click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the vector.&lt;br /&gt;
# Matrix entry.&lt;br /&gt;
## You can use Maple “text entry” for matrices, e.g. for a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix, enter (without the quotes) Matrix(&amp;lt;math&amp;gt;[[1,2,3],[4,5,6]]&amp;lt;/math&amp;gt;) for the matrix: &amp;lt;math&amp;gt;\left(\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 5 &amp;amp; 6&lt;br /&gt;
\end{array}\right).&amp;lt;/math&amp;gt; For more columns, include more elements in each [.] block. For more rows, add more [.] blocks, each separated by a comma. This is “row-by-row” construction.&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3;4,5,6&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Column by column construction is also possible using &amp;lt;math&amp;gt;&amp;lt;1,4|2,5|3,6&amp;gt;&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;&amp;lt;&amp;lt;1,4&amp;gt;|&amp;lt;2,5&amp;gt;|&amp;lt;3,6&amp;gt;&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one row, say &amp;lt;math&amp;gt;\left[\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\end{array}\right]&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;), or as &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one column, say &amp;lt;math&amp;gt;\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[[1],[4]]&amp;lt;/math&amp;gt;) or as &amp;lt;math&amp;gt;&amp;lt;1,4&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Or, you can right-click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the matrix.&lt;br /&gt;
# Entering indexed elements in vectors and matrices - the methods below can be used with any of the vector or matrix entry schemes.&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{1}&amp;lt;/math&amp;gt; into an answer, you can use &amp;lt;math&amp;gt;x[1]&amp;lt;/math&amp;gt;, or you can use the Equation Editor. NB: select the right palette in the equation editor &amp;#039;&amp;#039;&amp;#039;before&amp;#039;&amp;#039;&amp;#039; entering any symbols!&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{11},&amp;lt;/math&amp;gt; use x[1,1], or use the Equation Editor, subject to the previous warning.&lt;br /&gt;
&lt;br /&gt;
On the whole, the entry methods using &amp;lt;math&amp;gt;&amp;lt;...&amp;gt;&amp;lt;/math&amp;gt; are the quickest to use.&lt;br /&gt;
&lt;br /&gt;
Arithmetic operators are best shown explicitly in entering answers. These operators are the standard ones, &amp;lt;math&amp;gt;+, -, *, /, ^&amp;lt;/math&amp;gt;.&lt;br /&gt;
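The row-by-row and column-by-column conventions described above can be cross-checked in any language. As an illustration only (plain Python, not Maple TA input syntax), here is the 2 x 3 example from the matrix-entry section:

```python
# Row-by-row construction, as in Matrix([[1,2,3],[4,5,6]]) or <1,2,3;4,5,6>
A = [[1, 2, 3],
     [4, 5, 6]]

# Column-by-column construction, as in <<1,4>|<2,5>|<3,6>>:
# list the columns, then transpose with zip to recover the rows
cols = [[1, 4], [2, 5], [3, 6]]
B = [list(row) for row in zip(*cols)]

assert A == B  # both conventions describe the same matrix
```

Either way the result is the same matrix; the conventions differ only in whether you supply it a row at a time or a column at a time.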
&lt;br /&gt;
= Question 1: Vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q1 Part 1) Given the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,x}+\mathbf{y,\mathbf{x-y},x-}2\mathbf{z,}3\mathbf{z}-2\mathbf{y,x}-2\mathbf{y}+\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q1 Part 2) Can you find a linear combination &amp;lt;math&amp;gt;\alpha\mathbf{x}+\beta\mathbf{y}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;\mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to generate (in turn) the vectors: &amp;lt;math&amp;gt;\mathbf{w}_{1}=\left[\begin{array}{c}&lt;br /&gt;
4\\&lt;br /&gt;
-2&lt;br /&gt;
\end{array}\right],\mathbf{w}_{2}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],\mathbf{w}_{3}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 1 Part 1 and Part 2.&lt;br /&gt;
&lt;br /&gt;
= Question 2: 3-d vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q2 Part 1) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 2) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;10\mathbf{x}-\alpha\mathbf{y,}\beta\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 3) Use the vector &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the vector &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to find &amp;lt;math&amp;gt;2\mathbf{x-z,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}+3\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 2 Part 1 and Part 2.&lt;br /&gt;
&lt;br /&gt;
= Question 3: Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q3 Part 1) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,\ x}+\mathbf{y,\ \mathbf{x-y},\ 2z}-\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}-2\mathbf{z,\ x}+\mathbf{y}-\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 2) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 3) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 4) Arrange the vectors &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
2\\&lt;br /&gt;
1\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 5) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
-1/2\\&lt;br /&gt;
-1/2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 6) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{v}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 3 Part 1, Part 2, Part 4, Part 5 and Part 6.&lt;br /&gt;
&lt;br /&gt;
= Question 4: Inner products =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q4 Part 1) Find the inner product of the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; What are &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y^{T}y?}&amp;lt;/math&amp;gt; Is it true that &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\mathbf{y}\right)^{2}\leq&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x\times y^{T}y?}&amp;lt;/math&amp;gt; This is called the Cauchy-Schwarz Inequality; equality holds only when &amp;lt;math&amp;gt;\mathbf{x}=\lambda\mathbf{y,}&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 2) Find a vector &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{z}=0&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Draw a diagram showing &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z:}&amp;lt;/math&amp;gt; how would you describe the relationship between &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 3) For: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
6\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y,\ x}^{T}\mathbf{z,\ y}^{T}\mathbf{z.}&amp;lt;/math&amp;gt; How would you describe the value of these inner products from a statistical perspective?&lt;br /&gt;
# (XS1 Q4 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{10}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; what is &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 5) If &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector with every element equal to &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; what is the quantity &amp;lt;math&amp;gt;c=\dfrac{1}{4}\mathbf{1}_{4}^{T}\mathbf{z?}&amp;lt;/math&amp;gt; Find the elements of the vector &amp;lt;math&amp;gt;\mathbf{z}-c\mathbf{1}_{4}.&amp;lt;/math&amp;gt; From a statistical perspective, what are the elements of this vector? Find the value of the inner product &amp;lt;math&amp;gt;\mathbf{1}_{4}^{T}\left(\mathbf{z}-c\mathbf{1}_{4}\right).&amp;lt;/math&amp;gt; What statistical information does this illustrate?&lt;br /&gt;
# (XS1 Q4 Part 6) Using &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt; from part (5), what is the inner product of &amp;lt;math&amp;gt;\left(\mathbf{z}-c\mathbf{1}_{4}\right)&amp;lt;/math&amp;gt; with itself? If this sum of squares is divided by &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;4,&amp;lt;/math&amp;gt; what statistical quantity is the result?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 4 Part 1, Part 2, Part 3, and Part 4.&lt;br /&gt;
&lt;br /&gt;
= Question 5: Across and down =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q5 Part 1) Suppose that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;r\times s&amp;lt;/math&amp;gt; matrix with typical element &amp;lt;math&amp;gt;b_{ij},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;s\times1&amp;lt;/math&amp;gt; vector, with typical element &amp;lt;math&amp;gt;z_{j}.&amp;lt;/math&amp;gt; What is the second element of the product &amp;lt;math&amp;gt;B\mathbf{z?}&amp;lt;/math&amp;gt; What is the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th element?&lt;br /&gt;
# (XS1 Q5 Part 2) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\mathbf{b}_{1},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A\mathbf{b}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; make &amp;lt;math&amp;gt;\mathbf{b}_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{2}&amp;lt;/math&amp;gt; the columns of a matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; and find the matrix which is equal to &amp;lt;math&amp;gt;AB.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 4) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 4\\&lt;br /&gt;
6 &amp;amp; 7&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA.&amp;lt;/math&amp;gt; What property do the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; have?&lt;br /&gt;
# (XS1 Q5 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}.&amp;lt;/math&amp;gt; Why are &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}&amp;lt;/math&amp;gt; not equal?&lt;br /&gt;
# (XS1 Q5 Part 6) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{11}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}A_{4}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 7) If &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}=\left[\begin{array}{rrr}&lt;br /&gt;
3 &amp;amp; 0 &amp;amp; -1\\&lt;br /&gt;
-2 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt;, find &amp;lt;math&amp;gt;A_{11}A_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}A_{11}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 8) If &amp;lt;math&amp;gt;A_{1}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 0 &amp;amp; -3\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 4\\&lt;br /&gt;
2 &amp;amp; 2 &amp;amp; -11&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{8}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 2 &amp;amp; 6\\&lt;br /&gt;
1 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
5 &amp;amp; 0 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt;, find &amp;lt;math&amp;gt;2A_{8}-3A_{1}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 5 Part 1, Part 2, Part 4, and Part 8.&lt;br /&gt;
&lt;br /&gt;
= Question 6: Transposition =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) Find the transpose of the following matrices:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A_{1} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
-2 &amp;amp; 3\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right];\ \ \ A_{2}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 9\\&lt;br /&gt;
6 &amp;amp; -2 &amp;amp; 15&lt;br /&gt;
\end{array}\right];\ \ \ A_{3}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
2 &amp;amp; -1\\&lt;br /&gt;
0 &amp;amp; 0&lt;br /&gt;
\end{array}\right];\\&lt;br /&gt;
A_{4} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{5}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
-1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{6}=\left[\begin{array}{rr}&lt;br /&gt;
4 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5\\&lt;br /&gt;
-2 &amp;amp; 3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) What is the dimension of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; for each matrix in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Check that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A&amp;lt;/math&amp;gt; for each of the matrices in part (1).&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) What properties do the matrices &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{5}&amp;lt;/math&amp;gt; have that are not shared by the other matrices in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}^{T}.&amp;lt;/math&amp;gt; Confirm that &amp;lt;math&amp;gt;\left(A_{4}A_{10}\right)^{T}=A_{10}^{T}A_{4}^{T}.&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 6 Part 1, Part 4, and Part 5.&lt;br /&gt;
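The transposition rule in part 5 is easy to verify numerically; a minimal Python/numpy sketch with the matrices printed above (numpy is an assumption here, not part of the course materials):

```python
import numpy as np

A4 = np.array([[-3, 9, 4],
               [0, 5, 2]])     # 2x3
A10 = np.array([[-3, 2],
                [4, 9],
                [1, -2]])      # 3x2

lhs = (A4 @ A10).T             # transpose of the product
rhs = A10.T @ A4.T             # product of transposes, in reverse order
print(lhs)
```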
&lt;br /&gt;
= Question 7: Special Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q7 Part 1) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; confirm that &amp;lt;math&amp;gt;I_{2}A_{4}=A_{4}I_{3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 2) If &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
3 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;DA_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}D.&amp;lt;/math&amp;gt; What pattern can you detect in the results?&lt;br /&gt;
# (XS1 Q7 Part 3) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{4}^{T}A_{4}.&amp;lt;/math&amp;gt; Are these two matrices equal? What property do these two matrices possess?&lt;br /&gt;
# (XS1 Q7 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{yy}^{T}&amp;lt;/math&amp;gt;. Are these symmetric? Do they equal &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{y?}&amp;lt;/math&amp;gt; Find &amp;lt;math&amp;gt;\mathbf{xy}^{T}:&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; equal to &amp;lt;math&amp;gt;\mathbf{yx}^{T}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 5) If &amp;lt;math&amp;gt;L=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
3 &amp;amp; 2 &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; show that &amp;lt;math&amp;gt;L^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 7 Part 3 and Part 4.&lt;br /&gt;
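A numerical sanity check for parts 3 and 5: &amp;lt;math&amp;gt;A_{4}A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{4}^{T}A_{4}&amp;lt;/math&amp;gt; are both symmetric but of different sizes, and the transpose of a lower triangular matrix is upper triangular. A Python/numpy sketch, for illustration only:

```python
import numpy as np

A4 = np.array([[-3, 9, 4],
               [0, 5, 2]])
L = np.array([[1, 0, 0],
              [2, 1, 0],
              [3, 2, 1]])

S1 = A4 @ A4.T   # 2x2
S2 = A4.T @ A4   # 3x3: different shape, so S1 and S2 cannot be equal
print((S1 == S1.T).all(), (S2 == S2.T).all())   # both are symmetric
print((L.T == np.triu(L.T)).all())              # L^T equals its upper triangle
```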
&lt;br /&gt;
= Question 8: Partitioned matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q8 Part 1) Write &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns. What are &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}?&amp;lt;/math&amp;gt; Write &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
-1 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; -2\\&lt;br /&gt;
7 &amp;amp; 7&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{r}&lt;br /&gt;
C_{1}\\&lt;br /&gt;
C_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}&amp;lt;/math&amp;gt; have 2 rows. Express the product &amp;lt;math&amp;gt;A_{2}A_{9}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B_{2},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}.&amp;lt;/math&amp;gt; Do the same for the product &amp;lt;math&amp;gt;A_{9}A_{2},&amp;lt;/math&amp;gt; carefully stating the dimensions of any submatrices.&lt;br /&gt;
# (XS1 Q8 Part 1) If &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns, and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
5\\&lt;br /&gt;
1\\&lt;br /&gt;
-3\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\mathbf{z}_{i}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; give an expression for &amp;lt;math&amp;gt;A_{2}\mathbf{z}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1},B_{2},\mathbf{z}_{1},\mathbf{z}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q8 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A_{11}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{12}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{21}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{22}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}_{1}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{z}_{2}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; is the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; well defined? Does the product &amp;lt;math&amp;gt;A\mathbf{z}=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
A_{11}\mathbf{z}_{1}+A_{12}\mathbf{z}_{2}\\&lt;br /&gt;
A_{21}\mathbf{z}_{1}+A_{22}\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; exist in this form?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 8 Part 1.&lt;br /&gt;
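The partitioned product in part 2, &amp;lt;math&amp;gt;A_{2}\mathbf{z}=B_{1}\mathbf{z}_{1}+B_{2}\mathbf{z}_{2},&amp;lt;/math&amp;gt; can be checked numerically; a Python/numpy sketch with the blocks sliced out of the arrays printed above (for illustration only):

```python
import numpy as np

A2 = np.array([[1, 2, 1, -3],
               [6, 0, 10, 9],
               [2, 0, 3, 4]])
z = np.array([5, 1, -3, 2])

B1, B2 = A2[:, :2], A2[:, 2:]   # B1 and B2 each have two columns
z1, z2 = z[:2], z[2:]           # z1 and z2 are each 2x1

lhs = A2 @ z                    # full product
rhs = B1 @ z1 + B2 @ z2         # block-by-block product
print(lhs, rhs)
```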
&lt;br /&gt;
= Question 9: Data matrices =&lt;br /&gt;
&lt;br /&gt;
A small data set on variables &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
14 &amp;amp; 2\\&lt;br /&gt;
17 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; 3\\&lt;br /&gt;
16 &amp;amp; 5\\&lt;br /&gt;
3 &amp;amp; 2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q9 Part 1) Define a vector of observations on &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and a matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; so that the two-variable regression model &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i}&amp;lt;/math&amp;gt; can be represented, using the data in &amp;lt;math&amp;gt;D,&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q9 Part 1) For your choice of &amp;lt;math&amp;gt;X,&amp;lt;/math&amp;gt; compute &amp;lt;math&amp;gt;X^{T}X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; Check your answers in Matlab.&lt;br /&gt;
&lt;br /&gt;
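The question suggests checking in Matlab; an equivalent check in Python/numpy is sketched below, assuming the natural choice of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; with an intercept column followed by the observations on &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; (numpy is an illustration, not a course requirement):

```python
import numpy as np

# Data matrix D: first column y, second column x
D = np.array([[14, 2],
              [17, 4],
              [8, 3],
              [16, 5],
              [3, 2]])
y = D[:, 0]
x = D[:, 1]
X = np.column_stack([np.ones(5), x])   # intercept column, then x

print(X.T @ X)
print(X.T @ y)
```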
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 9 Part 1.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor001.png&amp;diff=3036</id>
		<title>File:Equation editor001.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor001.png&amp;diff=3036"/>
				<updated>2013-09-10T14:39:32Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Maple TA screenshot of basic Equation Editor&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Maple TA screenshot of basic Equation Editor&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor_subscripts.png&amp;diff=3035</id>
		<title>File:Equation editor subscripts.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor_subscripts.png&amp;diff=3035"/>
				<updated>2013-09-10T14:39:09Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Maple TA screenshot of Subscripts and superscripts part of Equation Editor&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Maple TA screenshot of Subscripts and superscripts part of Equation Editor&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor_arrays.png&amp;diff=3034</id>
		<title>File:Equation editor arrays.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Equation_editor_arrays.png&amp;diff=3034"/>
				<updated>2013-09-10T14:38:22Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Maple TA screenshot of array part of Equation editor&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Maple TA screenshot of array part of Equation editor&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3031</id>
		<title>Exercise Sheet 2</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3031"/>
				<updated>2013-09-10T14:31:08Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Blackboard is the University’s virtual learning environment. All of the material for this module, including formative and summative assessments, will be available in Blackboard.&lt;br /&gt;
&lt;br /&gt;
You will normally enter the Blackboard site for this course through [https://my.manchester.ac.uk/ MyManchester] at https://my.manchester.ac.uk/ .&lt;br /&gt;
&lt;br /&gt;
All of the Blackboard material for the course is organised into Lecture topics, so that lecture notes, lecture slides, and exercise sheets can be found in the folder for each Lecture topic.&lt;br /&gt;
&lt;br /&gt;
The questions on each exercise sheet may be answered in the traditional way, on paper, and handed in to be marked. Alternatively you can answer the majority (but not all) of the exercise sheet questions online, using Maple TA. All of the questions on this exercise sheet can be answered in Maple TA. The name of the matching Maple TA assignment is indicated in each question in this exercise sheet.&lt;br /&gt;
&lt;br /&gt;
= Using Maple TA =&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. In this module, most of the questions on the paper Exercise Sheets are also available as Maple TA assignments. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
http://place36.placementtester.com/manchester&amp;lt;br /&amp;gt;and this link is also given in each Lecture folder, for convenience. Log in with your registration number (first 7 digits only): the password is also your registration number. Once you have logged in, there is generally a wait whilst the assignments are loaded. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON61001 Econometric Methods 2013-14&amp;lt;br /&amp;gt;by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheets. The assignments are organised by question group, as in the Exercise Sheets, or by individual question - a component of the question group. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between “Print assignment for off-line work” and “Work assignment on-line right now”. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the “Work ... online” option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (&amp;lt;math&amp;gt; +, -, *, /, ^ &amp;lt;/math&amp;gt;) explicitly in your answers. Using brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;lt;math&amp;gt;1/x-1&amp;lt;/math&amp;gt; - is it &amp;lt;math&amp;gt;(1/x)-1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1/(x-1)&amp;lt;/math&amp;gt;? Information about entering vectors and matrices in your answers is given in the next section, although the instructions are often repeated in questions.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and View Details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
Most of the Exercise Sheets have additional randomised questions associated with them in Maple TA. These are questions where Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These randomised questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the Refresh button at the top of the page to get another question.&lt;br /&gt;
&lt;br /&gt;
= Using the equation editor in Maple TA =&lt;br /&gt;
&lt;br /&gt;
Usually it is quicker to enter your answers directly in Maple TA rather than using the Equation Editor. Using the Equation Editor is straightforward: see Figure [fig:The-Equation-Editor]. To select a symbol, click on the required panel and select the symbol required. Figure [fig:Subscripts-and-superscripts.] shows the subscript and superscript panel, and Figure [fig:Arrays.] shows the array selection panel. Usage is fairly self-evident - keep trying until you find the required symbol.&lt;br /&gt;
&lt;br /&gt;
[[Image:1C__maple_local_screenshots_equation_editor001.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[Image:2C__maple_local_screenshots_equation_editor_subscripts.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[Image:3C__maple_local_screenshots_equation_editor_arrays.png|image]]&lt;br /&gt;
&lt;br /&gt;
= Entering vectors and matrices into Maple TA =&lt;br /&gt;
&lt;br /&gt;
Many questions on this exercise sheet require vectors, matrices or indexed expressions as their answers. Some questions provide an array of the right size for the answer, so that none of the methods below are required.&lt;br /&gt;
&lt;br /&gt;
There are a variety of ways of entering these expressions into Maple TA - choose the one you feel most comfortable using.&lt;br /&gt;
&lt;br /&gt;
# Vector entry.&lt;br /&gt;
## You can use Maple “text entry” for column vectors, e.g. for the &amp;lt;math&amp;gt;3\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\left(\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right),&amp;lt;/math&amp;gt; enter (without the quotes) Vector(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## For a row vector, use &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;, as above, or Vector[row](&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## Or, you can right click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the vector.&lt;br /&gt;
# Matrix entry.&lt;br /&gt;
## You can use Maple “text entry” for matrices, e.g. for a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix, enter (without the quotes) Matrix(&amp;lt;math&amp;gt;[[1,2,3],[4,5,6]]&amp;lt;/math&amp;gt;) for the matrix: &amp;lt;math&amp;gt;\left(\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 5 &amp;amp; 6&lt;br /&gt;
\end{array}\right).&amp;lt;/math&amp;gt; For more columns, include more elements in each [.] block. For more rows, add more [.] blocks, each separated by a comma. This is “row - by - row” construction.&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3;4,5,6&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Column by column construction is also possible using &amp;lt;math&amp;gt;&amp;lt;1,4|2,5|3,6&amp;gt;&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;&amp;lt;&amp;lt;1,4&amp;gt;|&amp;lt;2,5&amp;gt;|&amp;lt;3,6&amp;gt;&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one row, say &amp;lt;math&amp;gt;\left[\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\end{array}\right]&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;), or as &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one column, say &amp;lt;math&amp;gt;\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[[1],[4]]&amp;lt;/math&amp;gt;) or as &amp;lt;math&amp;gt;&amp;lt;1,4&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Or, you can right click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the matrix.&lt;br /&gt;
# Entering indexed elements in vectors and matrices - the methods below can be used with any of the vector or matrix entry schemes.&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{1}&amp;lt;/math&amp;gt; into an answer, you can use &amp;lt;math&amp;gt;x[1]&amp;lt;/math&amp;gt;, or you can use the Equation Editor. NB: select the right palette in the equation editor &amp;#039;&amp;#039;&amp;#039;before&amp;#039;&amp;#039;&amp;#039; entering any symbols!&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{11},&amp;lt;/math&amp;gt; use x[1,1], or use the Equation Editor, subject to the previous warning.&lt;br /&gt;
&lt;br /&gt;
On the whole, the entry methods using &amp;lt;math&amp;gt;&amp;lt;...&amp;gt;&amp;lt;/math&amp;gt; are the quickest to use.&lt;br /&gt;
&lt;br /&gt;
Arithmetic operators are best shown explicitly when entering answers. These operators are the standard ones, &amp;lt;math&amp;gt;+,-,*,/&amp;lt;/math&amp;gt; and ^.&lt;br /&gt;
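The row-by-row and column-by-column constructions described above have direct analogues in other matrix languages; purely for illustration, here is the same &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix built both ways in Python/numpy (numpy is an assumption, not part of Maple TA):

```python
import numpy as np

# Row-by-row construction, analogous to Matrix([[1,2,3],[4,5,6]])
M_rows = np.array([[1, 2, 3],
                   [4, 5, 6]])

# Column-by-column construction, analogous to <1,4|2,5|3,6>
M_cols = np.column_stack([[1, 4], [2, 5], [3, 6]])

print((M_rows == M_cols).all(), M_rows.shape)
```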
&lt;br /&gt;
= Question 1: Vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q1 Part 1) Given the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,x}+\mathbf{y,\mathbf{x-y},x-}2\mathbf{z,}3\mathbf{z}-2\mathbf{y,x}-2\mathbf{y}+\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q1 Part 2) Can you find a linear combination &amp;lt;math&amp;gt;\alpha\mathbf{x}+\beta\mathbf{y}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;\mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to generate (in turn) the vectors: &amp;lt;math&amp;gt;\mathbf{w}_{1}=\left[\begin{array}{c}&lt;br /&gt;
4\\&lt;br /&gt;
-2&lt;br /&gt;
\end{array}\right],\mathbf{w}_{2}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],\mathbf{w}_{3}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 1 Part 1 and Part 2.&lt;br /&gt;
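Part 2 amounts to solving a &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; linear system for &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta.&amp;lt;/math&amp;gt; One way to sketch this numerically is in Python/numpy (one of several possible approaches, and not a course tool):

```python
import numpy as np

x = np.array([1, -1])
y = np.array([1, 1])
A = np.column_stack([x, y])   # columns are x and y, so A @ [a, b] = a*x + b*y

# Solve A @ [alpha, beta] = w for each target vector in turn
for w in ([4, -2], [-2, 4], [-2, 2]):
    coeffs = np.linalg.solve(A, np.array(w, dtype=float))
    print(w, "->", coeffs)
```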
&lt;br /&gt;
= Question 2: 3 - d vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q2 Part 2) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 2) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;10\mathbf{x}-\alpha\mathbf{y,}\beta\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 3) Use the vector &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the vector &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to find &amp;lt;math&amp;gt;2\mathbf{x-z,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}+3\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 2 Part 1 and Part 2.&lt;br /&gt;
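Part 3 asks you to combine a 3-vector with a 2-vector, which is not defined; numpy makes the point concrete by refusing the operation (an illustrative sketch only):

```python
import numpy as np

x = np.array([1, 2, 0])   # 3x1
z = np.array([4, 2])      # 2x1

print(3 * x + 2 * np.array([1, -1, 0]))   # defined: vectors of the same length

try:
    2 * x - z             # vectors of different dimensions cannot be combined
except ValueError as err:
    print("undefined:", err)
```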
&lt;br /&gt;
= Question 3: Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q3 Part 1) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,\ x}+\mathbf{y,\ \mathbf{x-y},\ 2z}-\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}-2\mathbf{z,\ x}+\mathbf{y}-\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 2) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 3) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 4) Arrange the vectors &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
2\\&lt;br /&gt;
1\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 5) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
-1/2\\&lt;br /&gt;
-1/2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 6) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{v}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 3 Part 1, Part 2, Part 4, Part 5 and Part 6.&lt;br /&gt;
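Each part above uses the fact that &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is the linear combination of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with weights &amp;lt;math&amp;gt;\boldsymbol{\alpha}.&amp;lt;/math&amp;gt; A Python/numpy check of part 5, using the vectors as printed (illustration only):

```python
import numpy as np

x = np.array([1, 1, 1])
y = np.array([1, -1, -1])
w = np.array([-1, -1, 1])
A = np.column_stack([x, y, w])    # 3x3 matrix whose columns are x, y, w
alpha = np.array([0, -0.5, -0.5])

combo = A @ alpha                 # equals 0*x - (1/2)*y - (1/2)*w
print(A.shape, combo)
```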
&lt;br /&gt;
= Question 4: Inner products =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q4 Part 1) Find the inner product of the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; What are &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y^{T}y?}&amp;lt;/math&amp;gt; Is it true that &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\mathbf{y}\right)^{2}\leq&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x\times y^{T}y?}&amp;lt;/math&amp;gt; This is called the Cauchy-Schwarz Inequality; equality holds only when &amp;lt;math&amp;gt;\mathbf{x}=\lambda\mathbf{y,}&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 2) Find a vector &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{z}=0&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Draw a diagram showing &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z:}&amp;lt;/math&amp;gt; how would you describe the relationship between &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 3) For: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
6\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y,\ x}^{T}\mathbf{z,\ y}^{T}\mathbf{z.}&amp;lt;/math&amp;gt; How would you describe the value of these inner products from a statistical perspective?&lt;br /&gt;
# (XS1 Q4 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{10}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; what is &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 5) If &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector with every element equal to &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; what is the quantity &amp;lt;math&amp;gt;c=\dfrac{1}{4}\mathbf{1}_{4}^{T}\mathbf{z?}&amp;lt;/math&amp;gt; Find the elements of the vector &amp;lt;math&amp;gt;\mathbf{z}-c\mathbf{1}_{4}.&amp;lt;/math&amp;gt; From a statistical perspective, what are the elements of this vector? Find the value of the inner product &amp;lt;math&amp;gt;\mathbf{1}_{4}^{T}\left(\mathbf{z}-c\mathbf{1}_{4}\right).&amp;lt;/math&amp;gt; What statistical information does this illustrate?&lt;br /&gt;
# (XS1 Q4 Part 6) Using &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt; from part (5), what is the inner product of &amp;lt;math&amp;gt;\left(\mathbf{z}-c\mathbf{1}_{4}\right)&amp;lt;/math&amp;gt; with itself? If this sum of squares is divided by &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;4,&amp;lt;/math&amp;gt; what statistical quantity is the result?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 4 Part 1, Part 2, Part 3, and Part 4.&lt;br /&gt;
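The inner-product calculations in Question 4 are easy to verify numerically. A minimal Python sketch (an illustration only; the helper name inner is ours), using the vectors of Part 1:

```python
# Illustration only: inner products and the Cauchy-Schwarz check of XS1 Q4 Part 1.
def inner(u, v):
    """Inner product u^T v of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

x, y = [-2, 3], [3, 5]
print(inner(x, y), inner(x, x), inner(y, y))          # 9 13 34
print(inner(x, y) ** 2 <= inner(x, x) * inner(y, y))  # True (Cauchy-Schwarz)
```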
&lt;br /&gt;
= Question 5: Across and down =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q5 Part 1) Suppose that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;r\times s&amp;lt;/math&amp;gt; matrix with typical element &amp;lt;math&amp;gt;b_{ij},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;s\times1&amp;lt;/math&amp;gt; vector, with typical element &amp;lt;math&amp;gt;z_{j}.&amp;lt;/math&amp;gt; What is the second element of the product &amp;lt;math&amp;gt;B\mathbf{z?}&amp;lt;/math&amp;gt; What is the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th element?&lt;br /&gt;
# (XS1 Q5 Part 2) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\mathbf{b}_{1},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A\mathbf{b}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; make &amp;lt;math&amp;gt;\mathbf{b}_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{2}&amp;lt;/math&amp;gt; the columns of a matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; and find the matrix which is equal to &amp;lt;math&amp;gt;AB.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 4) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 4\\&lt;br /&gt;
6 &amp;amp; 7&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA.&amp;lt;/math&amp;gt; What property do the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; have?&lt;br /&gt;
# (XS1 Q5 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}.&amp;lt;/math&amp;gt; Why are &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}&amp;lt;/math&amp;gt; not equal?&lt;br /&gt;
# (XS1 Q5 Part 6) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{11}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}A_{4}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 7) If &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}=\left[\begin{array}{rrr}&lt;br /&gt;
3 &amp;amp; 0 &amp;amp; -1\\&lt;br /&gt;
-2 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{11}A_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}A_{11}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 8) If &amp;lt;math&amp;gt;A_{1}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 0 &amp;amp; -3\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 4\\&lt;br /&gt;
2 &amp;amp; 2 &amp;amp; -11&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{8}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 2 &amp;amp; 6\\&lt;br /&gt;
1 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
5 &amp;amp; 0 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;2A_{8}-3A_{1}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 5 Part 1, Part 2, Part 4, and Part 8.&lt;br /&gt;
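The “across and down” rule of Question 5 can be checked with a short Python sketch (an illustration only; the helper name mat_mul is ours). Here B has the vectors b1 and b2 of Parts 2-3 as its columns, so the columns of AB are Ab1 and Ab2:

```python
# Illustration only: the (i,j) entry of AB is row i of A "across"
# multiplied into column j of B "down", as in XS1 Q5 Parts 2-3.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, 3], [1, 0, 4]]
B = [[4, 1], [1, 2], [2, -1]]   # columns are b1 = [4,1,2] and b2 = [1,2,-1]
print(mat_mul(A, B))            # [[15, 1], [12, -3]]
```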
&lt;br /&gt;
= Question 6: Transposition =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) Find the transpose of the following matrices:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A_{1} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
-2 &amp;amp; 3\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right];\ \ \ A_{2}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 9\\&lt;br /&gt;
6 &amp;amp; -2 &amp;amp; 15&lt;br /&gt;
\end{array}\right];\ \ \ A_{3}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
2 &amp;amp; -1\\&lt;br /&gt;
0 &amp;amp; 0&lt;br /&gt;
\end{array}\right];\\&lt;br /&gt;
A_{4} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{5}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
-1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{6}=\left[\begin{array}{rr}&lt;br /&gt;
4 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5\\&lt;br /&gt;
-2 &amp;amp; 3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) What is the dimension of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; for each matrix in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Check that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A&amp;lt;/math&amp;gt; for each of the matrices in part (1).&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) What properties do the matrices &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{5}&amp;lt;/math&amp;gt; have that are not shared by the other matrices in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}^{T}.&amp;lt;/math&amp;gt; Confirm that &amp;lt;math&amp;gt;\left(A_{4}A_{10}\right)^{T}=A_{10}^{T}A_{4}^{T}.&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 6 Part 1, Part 4, and Part 5.&lt;br /&gt;
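The reversal rule of Part 5, that the transpose of a product reverses the order of the factors, can also be confirmed numerically. A Python sketch (an illustration only; the helper names transpose and mat_mul are ours):

```python
# Illustration only: checking (A4*A10)^T == A10^T * A4^T from XS1 Q6 Part 5.
def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A4 = [[-3, 9, 4], [0, 5, 2]]
A10 = [[-3, 2], [4, 9], [1, -2]]
print(transpose(mat_mul(A4, A10)) == mat_mul(transpose(A10), transpose(A4)))  # True
```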
&lt;br /&gt;
= Question 7: Special Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q7 Part 1) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; confirm that &amp;lt;math&amp;gt;I_{2}A_{4}=A_{4}I_{3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 2) If &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
3 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;DA_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}D.&amp;lt;/math&amp;gt; What pattern can you detect in the results?&lt;br /&gt;
# (XS1 Q7 Part 3) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{4}^{T}A_{4}.&amp;lt;/math&amp;gt; Are these two matrices equal? What property do these two matrices possess?&lt;br /&gt;
# (XS1 Q7 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{yy}^{T}&amp;lt;/math&amp;gt;. Are these symmetric? Do they equal &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{y?}&amp;lt;/math&amp;gt; Find &amp;lt;math&amp;gt;\mathbf{xy}^{T}:&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; equal to &amp;lt;math&amp;gt;\mathbf{yx}^{T}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 5) If &amp;lt;math&amp;gt;L=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
3 &amp;amp; 2 &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; show that &amp;lt;math&amp;gt;L^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 7 Part 3 and 4.&lt;br /&gt;
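The diagonal-matrix pattern asked about in Part 2 - premultiplying by D scales the rows, postmultiplying scales the columns - can be seen numerically in a Python sketch (an illustration only; the helper name mat_mul is ours):

```python
# Illustration only: D*A4 scales row i of A4 by d_ii, and A10*D scales
# column j of A10 by d_jj, as in XS1 Q7 Part 2.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

D = [[3, 0], [0, 5]]
A4 = [[-3, 9, 4], [0, 5, 2]]
A10 = [[-3, 2], [4, 9], [1, -2]]
print(mat_mul(D, A4))   # [[-9, 27, 12], [0, 25, 10]]: rows scaled by 3 and 5
print(mat_mul(A10, D))  # [[-9, 10], [12, 45], [3, -10]]: columns scaled by 3 and 5
```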
&lt;br /&gt;
= Question 8: Partitioned matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q8 Part 1) Write &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns. What are &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}?&amp;lt;/math&amp;gt; Write &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
-1 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; -2\\&lt;br /&gt;
7 &amp;amp; 7&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{r}&lt;br /&gt;
C_{1}\\&lt;br /&gt;
C_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}&amp;lt;/math&amp;gt; have 2 rows. Express the product &amp;lt;math&amp;gt;A_{2}A_{9}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B_{2},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}.&amp;lt;/math&amp;gt; Do the same for the product &amp;lt;math&amp;gt;A_{9}A_{2},&amp;lt;/math&amp;gt; carefully stating the dimensions of any submatrices.&lt;br /&gt;
# (XS1 Q8 Part 1) If &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns, and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
5\\&lt;br /&gt;
1\\&lt;br /&gt;
-3\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\mathbf{z}_{i}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; give an expression for &amp;lt;math&amp;gt;A_{2}\mathbf{z}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1},B_{2},\mathbf{z}_{1},\mathbf{z}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q8 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A_{11}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{12}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{21}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{22}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}_{1}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{z}_{2}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; is the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; well defined? Does the product &amp;lt;math&amp;gt;A\mathbf{z}=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
A_{11}\mathbf{z}_{1}+A_{12}\mathbf{z}_{2}\\&lt;br /&gt;
A_{21}\mathbf{z}_{1}+A_{22}\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; exist in this form?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 8 Part 1.&lt;br /&gt;
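The partitioned product of Part 2 can be verified numerically: splitting A2 by columns and z conformably by rows gives A2*z = B1*z1 + B2*z2. A Python sketch (an illustration only; the helper name mat_vec is ours):

```python
# Illustration only: checking A2*z == B1*z1 + B2*z2 for the partition of XS1 Q8 Part 2.
def mat_vec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

A2 = [[1, 2, 1, -3], [6, 0, 10, 9], [2, 0, 3, 4]]
z = [5, 1, -3, 2]
B1 = [row[:2] for row in A2]   # first two columns of A2
B2 = [row[2:] for row in A2]   # last two columns of A2
z1, z2 = z[:2], z[2:]
full = mat_vec(A2, z)
split = [a + b for a, b in zip(mat_vec(B1, z1), mat_vec(B2, z2))]
print(full, full == split)     # [-2, 18, 9] True
```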
&lt;br /&gt;
= Question 9: Data matrices =&lt;br /&gt;
&lt;br /&gt;
A small data set on variables &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
14 &amp;amp; 2\\&lt;br /&gt;
17 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; 3\\&lt;br /&gt;
16 &amp;amp; 5\\&lt;br /&gt;
3 &amp;amp; 2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q9 Part 1) Define a vector of observations on &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and a matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; so that the two variable regression model &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i}&amp;lt;/math&amp;gt; can be represented, using the data in &amp;lt;math&amp;gt;D,&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q9 Part 1) Compute for your choice of &amp;lt;math&amp;gt;X,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;X^{T}X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; Check your answers in Matlab.&lt;br /&gt;
&lt;br /&gt;
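Part 2 suggests checking the answers in Matlab; for readers without Matlab, the same check can be sketched in Python (an illustration only - the names y, X and mat_mul below are our own choices, matching one natural definition of X with a column of ones):

```python
# Illustration only: building y and X from the data matrix D of Question 9
# and computing X^T X and X^T y for the regression y = X*delta + u.
D = [[14, 2], [17, 4], [8, 3], [16, 5], [3, 2]]
y = [row[0] for row in D]
X = [[1, row[1]] for row in D]          # a column of ones, then the x values

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

Xt = [list(c) for c in zip(*X)]
print(mat_mul(Xt, X))                   # [[5, 16], [16, 58]]
print(mat_mul(Xt, [[v] for v in y]))    # [[58], [206]]
```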
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 9 Part 1.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3030</id>
		<title>Exercise Sheet 2</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Exercise_Sheet_2&amp;diff=3030"/>
				<updated>2013-09-10T14:30:36Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Created page with &amp;quot;= Introduction =  Blackboard is the University’s virtual learning environment. All of the material for this module, including formative assessments, and summative assessment...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Blackboard is the University’s virtual learning environment. All of the material for this module, including formative assessments, and summative assessments, will be available in Blackboard.&lt;br /&gt;
&lt;br /&gt;
You will normally enter the Blackboard site for this course through [https://my.manchester.ac.uk/ MyManchester] at https://my.manchester.ac.uk/ .&lt;br /&gt;
&lt;br /&gt;
All of the Blackboard material for the course is organised into Lecture topics, so that lecture notes, lecture slides, and exercise sheets can be found in the folder for each Lecture topic.&lt;br /&gt;
&lt;br /&gt;
The questions on each exercise sheet may be answered in the traditional way, on paper, and handed in to be marked. Alternatively you can answer the majority (but not all) of the exercise sheet questions online, using Maple TA. All of the questions on this exercise sheet can be answered in Maple TA. The name of the matching Maple TA assignment is indicated in each question in this exercise sheet.&lt;br /&gt;
&lt;br /&gt;
= Using Maple TA =&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. In this module, most of the questions on the paper Exercise Sheets are also available as Maple TA assignments. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
http://place36.placementtester.com/manchester&amp;lt;br /&amp;gt;and this link is also given in each Lecture folder, for convenience. Log in with your registration number (first 7 digits only): the password is also your registration number. Once you have logged in, there is generally a wait whilst the assignments are loaded. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON61001 Econometric Methods 2013-14&amp;lt;br /&amp;gt;by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheets. The assignments are organised by question group, as in the Exercise Sheets, or by individual question - a component of the question group. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between “Print assignment for off-line work” or “Work assignment on-line right now”. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the Work ... online option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (&amp;lt;math&amp;gt;+,-,*,/,^&amp;lt;/math&amp;gt;) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;lt;math&amp;gt;1/x-1&amp;lt;/math&amp;gt; - is it &amp;lt;math&amp;gt;(1/x)-1&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1/(x-1)&amp;lt;/math&amp;gt;? Information about the entry of vectors and matrices in your answers is given in the next section, although the instructions are often repeated in questions.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and View Details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
Most of the Exercise Sheets have additional randomised questions associated with them in Maple TA. These are questions where Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These randomised questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions too hard, simply click on the Refresh button at the top of the page to get another question.&lt;br /&gt;
&lt;br /&gt;
= Using the equation editor in Maple TA =&lt;br /&gt;
&lt;br /&gt;
Usually it is quicker to enter your answers directly in Maple TA rather than using the Equation Editor. Using the Equation Editor is straightforward: see the first screenshot below. To select a symbol, click on the required panel and select the symbol required. The second screenshot shows the subscript and superscript panel, and the third shows the array selection panel. Usage is fairly self-evident - keep trying until you find the required symbol.&lt;br /&gt;
&lt;br /&gt;
[[Image:1C__maple_local_screenshots_equation_editor001.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[Image:2C__maple_local_screenshots_equation_editor_subscripts.png|image]]&lt;br /&gt;
&lt;br /&gt;
[[Image:3C__maple_local_screenshots_equation_editor_arrays.png|image]]&lt;br /&gt;
&lt;br /&gt;
= Entering vectors and matrices into Maple TA =&lt;br /&gt;
&lt;br /&gt;
Many questions on this exercise sheet require vectors, matrices or indexed expressions as their answers. Some questions provide an array of the right size for the answer, so that none of the methods below is required.&lt;br /&gt;
&lt;br /&gt;
There are a variety of ways of entering these expressions into Maple TA - choose the one you feel most comfortable using.&lt;br /&gt;
&lt;br /&gt;
# Vector entry.&lt;br /&gt;
## You can use Maple “text entry” for column vectors, e.g. for the &amp;lt;math&amp;gt;3\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\left(\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right),&amp;lt;/math&amp;gt; enter (without the quotes) Vector(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## For a row vector, use &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;, as above, or Vector[row](&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;).&lt;br /&gt;
## Or, you can right click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the vector.&lt;br /&gt;
# Matrix entry.&lt;br /&gt;
## You can use Maple “text entry” for matrices, e.g. for a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix, enter (without the quotes) Matrix(&amp;lt;math&amp;gt;[[1,2,3],[4,5,6]]&amp;lt;/math&amp;gt;) for the matrix: &amp;lt;math&amp;gt;\left(\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 5 &amp;amp; 6&lt;br /&gt;
\end{array}\right).&amp;lt;/math&amp;gt; For more columns, include more elements in each [.] block. For more rows, add more [.] blocks, each separated by a comma. This is “row-by-row” construction.&lt;br /&gt;
## A quicker version is &amp;lt;math&amp;gt;&amp;lt;1,2,3;4,5,6&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Column by column construction is also possible using &amp;lt;math&amp;gt;&amp;lt;1,4|2,5|3,6&amp;gt;&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;&amp;lt;&amp;lt;1,4&amp;gt;|&amp;lt;2,5&amp;gt;|&amp;lt;3,6&amp;gt;&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one row, say &amp;lt;math&amp;gt;\left[\begin{array}{ccc}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3\end{array}\right]&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[1,2,3]&amp;lt;/math&amp;gt;), or as &amp;lt;math&amp;gt;&amp;lt;1|2|3&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## A matrix with one column, say &amp;lt;math&amp;gt;\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; can be entered as Matrix(&amp;lt;math&amp;gt;[[1],[4]]&amp;lt;/math&amp;gt;) or as &amp;lt;math&amp;gt;&amp;lt;1,4&amp;gt;&amp;lt;/math&amp;gt;.&lt;br /&gt;
## Or, you can right click in the “Equation Editor”, select the array object, choose the correct matrix size and enter the elements of the product.&lt;br /&gt;
# Entering indexed elements in vectors and matrices - the methods below can be used with any of the vector or matrix entry schemes.&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{1}&amp;lt;/math&amp;gt; into an answer, you can use &amp;lt;math&amp;gt;x[1]&amp;lt;/math&amp;gt;, or you can use the Equation Editor. NB: select the right palette in the equation editor &amp;#039;&amp;#039;&amp;#039;before&amp;#039;&amp;#039;&amp;#039; entering any symbols!&lt;br /&gt;
## To insert &amp;lt;math&amp;gt;x_{11},&amp;lt;/math&amp;gt; use x[1,1], or use the Equation Editor, subject to the previous warning.&lt;br /&gt;
&lt;br /&gt;
On the whole, the entry methods using &amp;lt;math&amp;gt;&amp;lt;...&amp;gt;&amp;lt;/math&amp;gt; are the quickest to use.&lt;br /&gt;
&lt;br /&gt;
Arithmetic operators are best shown explicitly when entering answers. These operators are the standard ones, &amp;lt;math&amp;gt;+,-,*,/&amp;lt;/math&amp;gt; and ^.&lt;br /&gt;
&lt;br /&gt;
= Question 1: Vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q1 Part 1) Given the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,x}+\mathbf{y,\mathbf{x-y},x-}2\mathbf{z,}3\mathbf{z}-2\mathbf{y,x}-2\mathbf{y}+\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q1 Part 2) Can you find a linear combination &amp;lt;math&amp;gt;\alpha\mathbf{x}+\beta\mathbf{y}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;\mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y=}\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to generate (in turn) the vectors: &amp;lt;math&amp;gt;\mathbf{w}_{1}=\left[\begin{array}{c}&lt;br /&gt;
4\\&lt;br /&gt;
-2&lt;br /&gt;
\end{array}\right],\mathbf{w}_{2}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
4&lt;br /&gt;
\end{array}\right],\mathbf{w}_{3}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 1 Part 1 and Part 2.&lt;br /&gt;
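Question 1 Part 2 asks for coefficients with alpha*x + beta*y equal to each target in turn; stacking x and y as columns turns this into a small linear system. A quick numpy check (numpy used as a stand-in for the Matlab checks mentioned later in the sheet):

```python
import numpy as np

# Columns are x = (1, -1) and y = (1, 1) from XS1 Q1 Part 2
M = np.array([[1, 1],
              [-1, 1]])

# Each target w solves M @ [alpha, beta] = w
for w in ([4, -2], [-2, 4], [-2, 2]):
    coeffs = np.linalg.solve(M, np.array(w))
    print(w, coeffs)
```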
&lt;br /&gt;
= Question 2: 3-d vectors =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q2 Part 1) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 2) For the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;10\mathbf{x}-\alpha\mathbf{y,}\beta\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q2 Part 3) Use the vector &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the vector &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to find &amp;lt;math&amp;gt;2\mathbf{x-z,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}+3\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 2 Part 1 and Part 2.&lt;br /&gt;
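Part 3 of Question 2 mixes a 3-element vector with a 2-element one, so the combinations it asks for are not defined. A numpy sketch (numpy is an assumption, standing in for Matlab) shows how software reports the conformability failure:

```python
import numpy as np

# XS1 Q2 Part 3: x has 3 elements but z has only 2
x = np.array([1, 2, 0])
z = np.array([4, 2])

# 2x - z is not defined: the shapes do not match
try:
    result = 2 * x - z
except ValueError as err:
    print("not conformable:", err)
```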
&lt;br /&gt;
= Question 3: Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q3 Part 1) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{0}=\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;\mathbf{0}-\mathbf{x,\ x}+\mathbf{y,\ \mathbf{x-y},\ 2z}-\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}-2\mathbf{z,\ x}+\mathbf{y}-\mathbf{z.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 2) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; Find a suitable vector &amp;lt;math&amp;gt;\boldsymbol{\alpha}&amp;lt;/math&amp;gt; so that the product &amp;lt;math&amp;gt;A\boldsymbol{\alpha}&amp;lt;/math&amp;gt; is equal to the following linear combinations in turn: &amp;lt;math&amp;gt;-2\mathbf{x,\ }3\mathbf{x}+2\mathbf{y,\ }4\mathbf{x}-2\mathbf{y.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 3) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 4) Arrange the vectors &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
2\\&lt;br /&gt;
1\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 5) Arrange the vectors : &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
-1/2\\&lt;br /&gt;
-1/2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q3 Part 6) Arrange the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{w}=\left[\begin{array}{c}&lt;br /&gt;
-1\\&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{v}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as the columns of a matrix &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; What is the dimension of &amp;lt;math&amp;gt;A?&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\boldsymbol{\alpha}=\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1/2\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\boldsymbol{\alpha}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 3 Part 1, Part 2, Part 4, Part 5 and Part 6.&lt;br /&gt;
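The point of Question 3 is that the product A*alpha is exactly a linear combination of the columns of A. Using the numbers from XS1 Q3 Part 4, a short numpy check (numpy is a stand-in for the Matlab checks the sheet mentions):

```python
import numpy as np

# XS1 Q3 Part 4: columns x, y, z stacked into a matrix A
A = np.column_stack(([1, 1, 1], [1, -1, -1], [1, 0, 0]))
alpha = np.array([2, 1, 3])

# A @ alpha is the linear combination 2x + 1y + 3z of the columns
print(A.shape)
print(A @ alpha)
```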
&lt;br /&gt;
= Question 4: Inner products =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q4 Part 1) Find the inner product of the vectors: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; What is &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y^{T}y?}&amp;lt;/math&amp;gt; Is it true that &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\mathbf{y}\right)^{2}\leq&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x\times y^{T}y?}&amp;lt;/math&amp;gt; This is called the Cauchy-Schwarz Inequality; equality holds only when &amp;lt;math&amp;gt;\mathbf{x}=\lambda\mathbf{y,}&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 2) Find a vector &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{z}=0&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Draw a diagram showing &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z:}&amp;lt;/math&amp;gt; how would you describe the relationship between &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 3) For: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3\\&lt;br /&gt;
1/3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
6\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y,\ x}^{T}\mathbf{z,\ y}^{T}\mathbf{z.}&amp;lt;/math&amp;gt; How would you describe the value of these inner products from a statistical perspective?&lt;br /&gt;
# (XS1 Q4 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{10}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; what is &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q4 Part 5) If &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector with every element equal to &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; what is the quantity &amp;lt;math&amp;gt;c=\dfrac{1}{4}\mathbf{1}_{4}^{T}\mathbf{z?}&amp;lt;/math&amp;gt; Find the elements of the vector &amp;lt;math&amp;gt;\mathbf{z}-c\mathbf{1}_{4}.&amp;lt;/math&amp;gt; From a statistical perspective, what are the elements of this vector? Find the value of the inner product &amp;lt;math&amp;gt;\mathbf{1}_{4}^{T}\left(\mathbf{z}-c\mathbf{1}_{4}\right).&amp;lt;/math&amp;gt; What statistical information does this illustrate?&lt;br /&gt;
# (XS1 Q4 Part 6) Using &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
3\\&lt;br /&gt;
7\\&lt;br /&gt;
1\\&lt;br /&gt;
9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{1}_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt; from part (5), what is the inner product of &amp;lt;math&amp;gt;\left(\mathbf{z}-c\mathbf{1}_{4}\right)&amp;lt;/math&amp;gt; with itself? If this distance is divided by &amp;lt;math&amp;gt;3&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;4,&amp;lt;/math&amp;gt; what statistical quantity is the result?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 4 Part 1, Part 2, Part 3, and Part 4.&lt;br /&gt;
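The Cauchy-Schwarz check in Question 4 Part 1 is a one-liner once the inner products are computed. A numpy sketch with the sheet's numbers (numpy assumed as a stand-in for Matlab):

```python
import numpy as np

# XS1 Q4 Part 1: inner products and the Cauchy-Schwarz check
x = np.array([-2, 3])
y = np.array([3, 5])

xty = x @ y   # x'y
xtx = x @ x   # x'x
yty = y @ y   # y'y

print(xty, xtx, yty)
print(xty ** 2, xtx * yty)   # (x'y)^2 should not exceed x'x times y'y
```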
&lt;br /&gt;
= Question 5: Across and down =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q5 Part 1) Suppose that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;r\times s&amp;lt;/math&amp;gt; matrix with typical element &amp;lt;math&amp;gt;\left\Vert b_{ij}\right\Vert ,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;s\times1&amp;lt;/math&amp;gt; vector, with typical element &amp;lt;math&amp;gt;z_{j}.&amp;lt;/math&amp;gt; What is the second element of the product &amp;lt;math&amp;gt;B\mathbf{z?}&amp;lt;/math&amp;gt; What is the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th element?&lt;br /&gt;
# (XS1 Q5 Part 2) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A\mathbf{b}_{1},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A\mathbf{b}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{1}=\left[\begin{array}{r}&lt;br /&gt;
4\\&lt;br /&gt;
1\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{b}_{2}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; make &amp;lt;math&amp;gt;\mathbf{b}_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}_{2}&amp;lt;/math&amp;gt; the columns of a matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; and find the matrix which is equal to &amp;lt;math&amp;gt;AB.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 4) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 4\\&lt;br /&gt;
6 &amp;amp; 7&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA.&amp;lt;/math&amp;gt; What property do the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; have?&lt;br /&gt;
# (XS1 Q5 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}.&amp;lt;/math&amp;gt; Why are &amp;lt;math&amp;gt;A_{4}A_{10}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}A_{4}&amp;lt;/math&amp;gt; not equal?&lt;br /&gt;
# (XS1 Q5 Part 6) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{11}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{11}A_{4}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 7) If &amp;lt;math&amp;gt;A_{11}=\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 4 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}=\left[\begin{array}{rrr}&lt;br /&gt;
3 &amp;amp; 0 &amp;amp; -1\\&lt;br /&gt;
-2 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt;, find &amp;lt;math&amp;gt;A_{11}A_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{12}A_{11}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q5 Part 8) If &amp;lt;math&amp;gt;A_{1}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 0 &amp;amp; -3\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 4\\&lt;br /&gt;
2 &amp;amp; 2 &amp;amp; -11&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{8}=\left[\begin{array}{rrr}&lt;br /&gt;
4 &amp;amp; 2 &amp;amp; 6\\&lt;br /&gt;
1 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
5 &amp;amp; 0 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt;, find &amp;lt;math&amp;gt;2A_{8}-3A_{1}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 5 Part 1, Part 2, Part 4, and Part 8.&lt;br /&gt;
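Question 5 Part 4 asks what is special about its pair of matrices: unlike the later parts, their products agree in both orders. A numpy verification (numpy assumed, standing in for Matlab):

```python
import numpy as np

# XS1 Q5 Part 4: these two matrices happen to commute
A = np.array([[1, 2], [3, 4]])
B = np.array([[1, 4], [6, 7]])

print(A @ B)
print(B @ A)
print(np.array_equal(A @ B, B @ A))  # True: AB = BA for this pair
```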
&lt;br /&gt;
= Question 6: Transposition =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 1) Find the transpose of the following matrices:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A_{1} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
-2 &amp;amp; 3\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right];\ \ \ A_{2}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 9\\&lt;br /&gt;
6 &amp;amp; -2 &amp;amp; 15&lt;br /&gt;
\end{array}\right];\ \ \ A_{3}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
2 &amp;amp; -1\\&lt;br /&gt;
0 &amp;amp; 0&lt;br /&gt;
\end{array}\right];\\&lt;br /&gt;
A_{4} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{5}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; -1\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; -1\\&lt;br /&gt;
-1 &amp;amp; -1 &amp;amp; 1&lt;br /&gt;
\end{array}\right];\ \ \ A_{6}=\left[\begin{array}{rr}&lt;br /&gt;
4 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5\\&lt;br /&gt;
-2 &amp;amp; 3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 2) What is the dimension of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; for each matrix in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Check that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A&amp;lt;/math&amp;gt; for each of the matrices in part (1).&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 4) What properties do the matrices &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{5}&amp;lt;/math&amp;gt; have that are not shared by the other matrices in part (1)?&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;(XS1 Q6 Part 5) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}^{T}.&amp;lt;/math&amp;gt; Confirm that &amp;lt;math&amp;gt;\left(A_{4}A_{10}\right)^{T}=A_{10}^{T}A_{4}^{T}.&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 6 Part 1, Part 4, and Part 5.&lt;br /&gt;
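The reversal rule for transposes in Question 6 Part 5 is easy to confirm numerically. A numpy sketch with the sheet's A4 and A10 (numpy assumed as a stand-in for Matlab):

```python
import numpy as np

# XS1 Q6 Part 5: confirm the reversal rule (A4 A10)' = A10' A4'
A4 = np.array([[-3, 9, 4], [0, 5, 2]])
A10 = np.array([[-3, 2], [4, 9], [1, -2]])

lhs = (A4 @ A10).T
rhs = A10.T @ A4.T
print(np.array_equal(lhs, rhs))  # True
```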
&lt;br /&gt;
= Question 7: Special Matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q7 Part 1) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; confirm that &amp;lt;math&amp;gt;I_{2}A_{4}=A_{4}I_{3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 2) If &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
3 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}=\left[\begin{array}{rr}&lt;br /&gt;
-3 &amp;amp; 2\\&lt;br /&gt;
4 &amp;amp; 9\\&lt;br /&gt;
1 &amp;amp; -2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;DA_{4}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{10}D.&amp;lt;/math&amp;gt; What pattern can you detect in the results?&lt;br /&gt;
# (XS1 Q7 Part 3) If &amp;lt;math&amp;gt;A_{4}=\left[\begin{array}{rrr}&lt;br /&gt;
-3 &amp;amp; 9 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 5 &amp;amp; 2&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;A_{4}A_{4}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A_{4}^{T}A_{4}.&amp;lt;/math&amp;gt; Are these two matrices equal? What property do these two matrices possess?&lt;br /&gt;
# (XS1 Q7 Part 4) If &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
-2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
-1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; find &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{yy}^{T}&amp;lt;/math&amp;gt;. Are these symmetric? Do they equal &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{y?}&amp;lt;/math&amp;gt; Find &amp;lt;math&amp;gt;\mathbf{xy}^{T}:&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; equal to &amp;lt;math&amp;gt;\mathbf{yx}^{T}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{y}^{T}\mathbf{x?}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q7 Part 5) If &amp;lt;math&amp;gt;L=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
2 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
3 &amp;amp; 2 &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; show that &amp;lt;math&amp;gt;L^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual questions are XS2 Random Question 7 Part 3 and 4.&lt;br /&gt;
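Parts 1 and 2 of Question 7 concern identity and diagonal matrices: multiplying by an identity of conformable size changes nothing, while pre-multiplying by a diagonal matrix scales the rows. A numpy check (numpy assumed, standing in for Matlab):

```python
import numpy as np

# XS1 Q7 Parts 1-2: identity matrices and diagonal scaling
A4 = np.array([[-3, 9, 4], [0, 5, 2]])
D = np.diag([3, 5])

# Identity on either side leaves A4 unchanged: I2 A4 = A4 I3
print(np.array_equal(np.eye(2) @ A4, A4 @ np.eye(3)))

# Pre-multiplying by D scales the rows of A4 by 3 and 5
print(D @ A4)
```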
&lt;br /&gt;
= Question 8: Partitioned matrices =&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q8 Part 1) Write &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; in the form &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns. What are &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}?&amp;lt;/math&amp;gt; Write &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
-1 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; -2\\&lt;br /&gt;
7 &amp;amp; 7&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;A_{9}=\left[\begin{array}{r}&lt;br /&gt;
C_{1}\\&lt;br /&gt;
C_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}&amp;lt;/math&amp;gt; have two rows. Express the product &amp;lt;math&amp;gt;A_{2}A_{9}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B_{2},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;C_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C_{2}.&amp;lt;/math&amp;gt; Do the same for the product &amp;lt;math&amp;gt;A_{9}A_{2},&amp;lt;/math&amp;gt; carefully stating the dimensions of any submatrices.&lt;br /&gt;
# (XS1 Q8 Part 2) If &amp;lt;math&amp;gt;A_{2}=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 1 &amp;amp; -3\\&lt;br /&gt;
6 &amp;amp; 0 &amp;amp; 10 &amp;amp; 9\\&lt;br /&gt;
2 &amp;amp; 0 &amp;amp; 3 &amp;amp; 4&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rr}&lt;br /&gt;
B_{1} &amp;amp; B_{2}\end{array}\right],&amp;lt;/math&amp;gt; where both &amp;lt;math&amp;gt;B_{1}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{2}&amp;lt;/math&amp;gt; have two columns, and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{c}&lt;br /&gt;
5\\&lt;br /&gt;
1\\&lt;br /&gt;
-3\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\mathbf{z}_{i}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; give an expression for &amp;lt;math&amp;gt;A_{2}\mathbf{z}&amp;lt;/math&amp;gt; in terms of &amp;lt;math&amp;gt;B_{1},B_{2},\mathbf{z}_{1},\mathbf{z}_{2}.&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q8 Part 3) If &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}=\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A_{11}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{12}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{21}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times2,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A_{22}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{z}_{1}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{z}_{2}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;2\times1,&amp;lt;/math&amp;gt; is the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; well defined? Does the product &amp;lt;math&amp;gt;A\mathbf{z}=\left[\begin{array}{cc}&lt;br /&gt;
A_{11} &amp;amp; A_{12}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
\mathbf{z}_{1}\\&lt;br /&gt;
\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
A_{11}\mathbf{z}_{1}+A_{12}\mathbf{z}_{2}\\&lt;br /&gt;
A_{21}\mathbf{z}_{1}+A_{22}\mathbf{z}_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; exist in this form?&lt;br /&gt;
&lt;br /&gt;
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual question is XS2 Random Question 8 Part 1.&lt;br /&gt;
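The partitioned product in Question 8 obeys the same across-and-down rule as ordinary multiplication: with A2 split into column blocks B1, B2 and z split conformably into z1, z2, the product is A2 z = B1 z1 + B2 z2. A numpy verification with the sheet's numbers (numpy assumed as a stand-in for Matlab):

```python
import numpy as np

# Question 8: partitioned product A2 z = B1 z1 + B2 z2
A2 = np.array([[1, 2, 1, -3],
               [6, 0, 10, 9],
               [2, 0, 3, 4]])
z = np.array([5, 1, -3, 2])

B1, B2 = A2[:, :2], A2[:, 2:]   # column blocks, two columns each
z1, z2 = z[:2], z[2:]           # conformable pieces of z

print(np.array_equal(A2 @ z, B1 @ z1 + B2 @ z2))  # True
```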
&lt;br /&gt;
= Question 9: Data matrices =&lt;br /&gt;
&lt;br /&gt;
A small data set on variables &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
14 &amp;amp; 2\\&lt;br /&gt;
17 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; 3\\&lt;br /&gt;
16 &amp;amp; 5\\&lt;br /&gt;
3 &amp;amp; 2&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# (XS1 Q9 Part 1) Define a vector of observations on &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; and a matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; so that the two variable regression model &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i}&amp;lt;/math&amp;gt; can be represented, using the data in &amp;lt;math&amp;gt;D,&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
# (XS1 Q9 Part 1) For your choice of &amp;lt;math&amp;gt;X,&amp;lt;/math&amp;gt; compute &amp;lt;math&amp;gt;X^{T}X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; Check your answers in Matlab.&lt;br /&gt;
&lt;br /&gt;
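The natural choice is X with a column of ones (for the intercept) next to the x observations. A numpy sketch of the X'X and X'y computation, assuming the first column of D holds y and the second holds x (numpy stands in for the Matlab check the question asks for):

```python
import numpy as np

# Question 9: build y and X from the data matrix D (column order y, x assumed)
D = np.array([[14, 2], [17, 4], [8, 3], [16, 5], [3, 2]])
y = D[:, 0]
X = np.column_stack((np.ones(5), D[:, 1]))   # intercept column plus x

print(X.T @ X)
print(X.T @ y)
```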
Additional “randomised” questions in Maple TA are in ExSheet2 Randomised Questions. The individual question is XS2 Random Question 9 Part 1.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3029</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3029"/>
				<updated>2013-09-10T14:26:51Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
The lecture notes are also available here,&lt;br /&gt;
&lt;br /&gt;
[[Lecture Notes 2|Lnotes]]&lt;br /&gt;
&lt;br /&gt;
and the corresponding exercise sheet is&lt;br /&gt;
&lt;br /&gt;
[[Exercise Sheet 2|XS2]]&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only): the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Using brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later, provided you click on Quit and Save while doing the assignment and before clicking on Grade. To return, simply find the assignment in the Class Homepage list and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3028</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3028"/>
				<updated>2013-09-10T14:15:01Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
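As an aside (the course itself uses Maple TA, not Python; this sketch is purely illustrative), the same data set can be stored as a matrix in plain Python, here as a list of rows:&lt;br /&gt;

```python
# The 12 x 2 data matrix D: each row holds one student's two measurements.
D = [
    [155, 70], [150, 63], [180, 72], [135, 60],
    [156, 66], [168, 70], [178, 74], [160, 65],
    [132, 62], [145, 67], [139, 65], [152, 68],
]
m = len(D)     # number of rows
n = len(D[0])  # number of columns
print(m, n)    # the order of D is 12 x 2
```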
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices having explicit numerical elements, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices where the elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element &amp;#039;&amp;#039;notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular &amp;#039;&amp;#039;matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
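A minimal Python sketch (an illustrative aside, with the row-by-row storage chosen here as an assumption, not course material) makes the &amp;#039;vectors as columns&amp;#039; construction concrete:&lt;br /&gt;

```python
a = [6, 3]  # first column vector
b = [2, 5]  # second column vector
# Place a and b side by side as the columns of A (A stored row by row).
A = [[a[i], b[i]] for i in range(2)]
print(A)  # [[6, 2], [3, 5]]
```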
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be seen as elements of column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation which converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
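Transposition can be sketched in plain Python (again an aside, assuming matrices stored as lists of rows): entry &amp;lt;math&amp;gt;(i,j)&amp;lt;/math&amp;gt; of the transpose is entry &amp;lt;math&amp;gt;(j,i)&amp;lt;/math&amp;gt; of the original.&lt;br /&gt;

```python
A = [[6, 2], [3, 5]]
# Transposition swaps rows and columns: the rows of AT are the columns of A.
AT = [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]
print(AT)  # [[6, 3], [2, 5]]
```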
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, the operation is not defined: some elements of the larger vector would have no counterpart to pair with.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
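The elementwise definition of a linear combination, including the same-dimension requirement, can be sketched in plain Python (an illustrative aside; the helper name lin_comb is invented here):&lt;br /&gt;

```python
def lin_comb(lam, x, mu, y):
    """Elementwise linear combination lam*x + mu*y of two vectors."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return [lam * xi + mu * yi for xi, yi in zip(x, y)]

print(lin_comb(2, [1, 2, 3], -1, [4, 5, 6]))  # [-2, -1, 0]
```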
&lt;br /&gt;
For matrices, these ideas carry over immediately: apply to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
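The worked example can be checked elementwise in plain Python (a quick illustrative sketch, not part of the notes):&lt;br /&gt;

```python
A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
lam, mu = 1, -2
# Elementwise linear combination lam*A + mu*B = A - 2B.
C = [[lam * A[i][j] + mu * B[i][j] for j in range(2)] for i in range(2)]
print(C)  # [[4, 0], [1, 7]]
```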
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
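The &amp;#039;multiply corresponding elements and add&amp;#039; definition translates directly into plain Python (an aside; the helper name inner is invented here):&lt;br /&gt;

```python
def inner(a, x):
    """Inner product a^T x: multiply corresponding elements and sum."""
    if len(a) != len(x):
        raise ValueError("vectors must be conformable")
    return sum(ai * xi for ai, xi in zip(a, x))

c = [6, 2]
x = [6, 3]
print(inner(c, x))  # 36 + 6 = 42, a scalar
```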
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
Figure 1:&lt;br /&gt;
&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
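The stacking of row-by-vector inner products can be sketched in plain Python (an aside; the helper name matvec is invented here):&lt;br /&gt;

```python
def matvec(A, x):
    """Across-and-down rule: the i-th entry of Ax is the inner product
    of the i-th row of A with x."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[6, 2], [3, 5]]
x = [6, 3]
print(matvec(A, x))  # [42, 33]
```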
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination &amp;#039;&amp;#039;of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right].&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
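The column-by-column definition of the product can be sketched in plain Python (an aside, with the helper names matvec and matmul invented here): form each column &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; and reassemble.&lt;br /&gt;

```python
def matvec(A, x):
    # i-th entry of Ax is the inner product of row i of A with x
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def matmul(A, B):
    """C = AB, built column by column: the k-th column of C is A
    times the k-th column of B."""
    cols = [matvec(A, [row[k] for row in B]) for k in range(len(B[0]))]
    # Reassemble the m x 1 columns into an m x r matrix stored by rows.
    return [[col[i] for col in cols] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
print(matmul(A, B))  # [[8, 4], [8, -2]]
```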
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\ \ \ \ \ \ \ \text{(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\ \ \ \ \ \ \ \text{(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \ \ \text{(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
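The across and down rule translates directly into nested loops. A minimal Python sketch (not part of the notes; the function name is our own):

```python
# Sketch: elementwise 'across and down' rule, c_ik = sum_j a_ij * b_jk.
# Matrices are lists of rows.

def matmul(A, B):
    m, n, r = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "conformability: columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(r)]
            for i in range(m)]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(matmul(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```

The inner `sum` runs across row <math>i</math> of <math>A</math> and down column <math>k</math> of <math>B</math>, exactly as the formula prescribes.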
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product will have as many rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and as many columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
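The conformability rule can be expressed as a tiny Python helper (a hypothetical illustration, not from the notes):

```python
# Hypothetical helper: is the product of shapes (m, n) and (p, r) defined,
# and if so, what shape does it have?
def product_shape(shape_a, shape_b):
    m, n = shape_a
    p, r = shape_b
    if n != p:
        return None          # not conformable: product undefined
    return (m, r)            # rows of A, columns of B

print(product_shape((2, 2), (2, 4)))  # (2, 4)
print(product_shape((1, 4), (4, 1)))  # (1, 1): a row times a column
print(product_shape((1, 4), (1, 4)))  # None: not conformable
```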
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tedium of the calculations. For numerical work, the solution is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
&lt;br /&gt;
4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;: the single quote is the transpose operator, so transposition in Matlab is very natural.&lt;br /&gt;
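The elementwise-multiply-then-sum idea can be mimicked in plain Python (an illustration only; Python lists stand in for Matlab vectors of matching orientation):

```python
# Mimic Matlab's sum(rowvec' .* colvec): elementwise multiply, then sum.
rowvec = [1, 2, 3, 4]
colvec = [1, 2, 3, 4]
elementwise = [r * c for r, c in zip(rowvec, colvec)]  # like .* on conformable vectors
print(elementwise)        # [1, 4, 9, 16]
print(sum(elementwise))   # 30, matching rowvec*colvec above
```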
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
 8 4 12 0&lt;br /&gt;
&lt;br /&gt;
 8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp() &amp;lt;/code&amp;gt;command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
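The non-commuting example above is easy to reproduce in a few lines of Python (an illustration; the helper is our own, not from the notes):

```python
# Verify that AB != BA for the 2 x 2 example in this section.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
print(matmul(A, B))  # [[8, 4], [8, -2]]
print(matmul(B, A))  # [[9, 7], [3, -3]]: AB != BA, so A and B do not commute
```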
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
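Transposition, and the fact that transposing twice recovers the original matrix, can be checked with a short Python sketch (an illustration; the function name is our own):

```python
# Transpose: element (i, j) of A becomes element (j, i) of A^T.
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

C = [[6, 2, -3], [3, 5, -1]]      # 2 x 3, as used elsewhere in these notes
Ct = transpose(C)                  # 3 x 2
print(Ct)                          # [[6, 3], [2, 5], [-3, -1]]
print(transpose(Ct) == C)          # True: (C^T)^T = C
```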
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from equation (6).&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
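The Product Rule for Transposition is easy to verify numerically on the matrices used earlier in these notes (a Python sketch; the helpers are ad hoc, not from the notes):

```python
# Check (AB)^T = B^T A^T on a 2x2 times 2x4 example.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
lhs = transpose(matmul(A, B))              # (AB)^T, a 4 x 2 matrix
rhs = matmul(transpose(B), transpose(A))   # B^T A^T, also 4 x 2
print(lhs == rhs)  # True
```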
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as the columns of a matrix in the natural order &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
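The identity property can be checked with a short Python sketch (an illustration; the helper names are our own):

```python
# I_n has 1s on the diagonal and 0s elsewhere; pre or post multiplying
# by it leaves a conformable matrix unchanged.
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
I2 = identity(2)
print(matmul(I2, A) == A)  # True
print(matmul(A, I2) == A)  # True
```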
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
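The scaling effect of a diagonal matrix can be confirmed on this section's matrices with a small Python sketch (an illustration; the helper is ad hoc):

```python
# Post multiplying A by diagonal D scales the columns of A;
# pre multiplying scales the rows.
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
D = [[2, 0], [0, -2]]
print(matmul(A, D))  # [[12, -4], [6, -10]]: columns of A scaled by 2 and -2
print(matmul(D, A))  # [[12, 4], [-6, -10]]: rows of A scaled by 2 and -2
```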
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This holds automatically on the diagonal, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off-diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above-diagonal elements coincides with the triangle of below-diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
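&lt;br /&gt;
This proposition is easy to verify numerically; a small Python sketch (plain lists, no libraries) using the &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; above:&lt;br /&gt;

```python
# Verify that C^T C equals its own transpose, i.e. is symmetric.
def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

C = [[6, 2, -3], [3, 5, -1]]
CtC = matmul(transpose(C), C)
print(CtC)                    # [[45, 27, -21], [27, 29, -11], [-21, -11, 10]]
print(CtC == transpose(CtC))  # True
```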
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; distinguishing it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
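&lt;br /&gt;
The same calculation as a short Python sketch (the helper name outer is our own, not a library function):&lt;br /&gt;

```python
# Outer product: each element of x multiplies each element of y.
def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

x = [6, 3]
print(outer(x, x))  # [[36, 18], [18, 9]] - a symmetric matrix
```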
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;sample mean&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
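&lt;br /&gt;
A one-line check of the sum-vector property, sketched in Python with the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above:&lt;br /&gt;

```python
# Inner product with the sum vector: sums, then averages, the elements of x.
x = [6, 3]
ones = [1] * len(x)            # the sum vector 1_n
total = sum(o * xi for o, xi in zip(ones, x))
print(total)                   # 9, the sum of the elements of x
print(total / len(x))          # 4.5, the sample mean
```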
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre-multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post-multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular&amp;#039;&amp;#039; matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular&amp;#039;&amp;#039; matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
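&lt;br /&gt;
The defining property - every element whose column index exceeds its row index is zero - is easy to test mechanically; a Python sketch (the helper name is_lower_triangular and the example matrices are our own):&lt;br /&gt;

```python
# Lower triangular: every element with column index j > row index i is zero.
def is_lower_triangular(X):
    return all(X[i][j] == 0
               for i in range(len(X))
               for j in range(len(X[i]))
               if j > i)

print(is_lower_triangular([[1, 0, 0],
                           [4, 2, 0],
                           [5, 6, 3]]))   # True
print(is_lower_triangular([[1, 7],
                           [0, 2]]))      # False: 7 lies above the diagonal
```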
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit&amp;#039;&amp;#039; triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].\label{eq:lt_matrix}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert&amp;lt;/math&amp;gt; whose elements are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],\label{eq:partition_a}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, so that those in the second row block have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
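&lt;br /&gt;
The block decomposition can be checked numerically; a Python sketch with hypothetical numbers (the &amp;lt;math&amp;gt;2\times4&amp;lt;/math&amp;gt; matrix and its split into column blocks of widths 1, 2 and 1 are our own illustration):&lt;br /&gt;

```python
# Partitioned product: A x equals A1 x1 + A2 x2 + A3 x3, block by block.
def matvec(X, v):
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(X))]

A = [[1, 2, 0, 3],
     [4, 1, 5, 2]]
x = [2, 1, 3, 1]
full = matvec(A, x)

# Column blocks of widths 1, 2 and 1, with x split to match.
A1 = [[1], [4]];       x1 = [2]
A2 = [[2, 0], [1, 5]]; x2 = [1, 3]
A3 = [[3], [2]];       x3 = [1]
blocks = [p + q + r for p, q, r in
          zip(matvec(A1, x1), matvec(A2, x2), matvec(A3, x3))]

print(full)            # [7, 26]
print(full == blocks)  # True
```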
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X&amp;#039; * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X&amp;#039; * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
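&lt;br /&gt;
The same quantities can be reproduced without Matlab; a Python cross-check built directly from the data matrix &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; given earlier:&lt;br /&gt;

```python
# Recompute X^T X and X^T y for the weight/height data, where X = [1 x].
weights = [155, 150, 180, 135, 156, 168, 178, 160, 132, 145, 139, 152]  # y
heights = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]              # x

n = len(heights)
xtx = [[n, sum(heights)],
       [sum(heights), sum(h * h for h in heights)]]
xty = [sum(weights),
       sum(h * w for h, w in zip(heights, weights))]

print(xtx)  # [[12, 802], [802, 53792]]
print(xty)  # [1850, 124258]
```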
&lt;br /&gt;
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3027</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3027"/>
				<updated>2013-09-10T14:11:24Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices with explicit numerical elements, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element &amp;#039;&amp;#039;notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be seen as elements of column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so some transformation which converts a column vector into a row vector, and vice versa, would be useful. This process is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, there would be some elements left over once all the elements of the smaller-dimensioned vector had been used up, so the operation is not defined.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
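The elementwise rules above can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes; the function name is ours):

```python
def linear_combination(lam, x, mu, y):
    """Return the linear combination lam*x + mu*y, elementwise.

    x and y are plain Python lists standing for n-vectors; they must
    have the same dimension, as required in the text.
    """
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return [lam * xi + mu * yi for xi, yi in zip(x, y)]

x = [6, 2]
y = [3, 5]
# typical element: 2*x_i + (-1)*y_i
print(linear_combination(2, x, -1, y))  # [9, -1]
```

Addition, subtraction and scalar multiplication are the special cases with scalars (1, 1), (1, -1) and (lam, 0) respectively.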
&lt;br /&gt;
For matrices, these ideas carry over immediately: the operations are applied to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix-vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\mathbf{b}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is not defined. When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
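The inner product and its conformability requirement can be sketched in Python (a sketch for illustration; the function name is ours):

```python
def inner(a, x):
    """Inner product a^T x: multiply corresponding elements and add up.

    The vectors must be conformable, i.e. have the same number of
    elements; otherwise the product is not defined.
    """
    if len(a) != len(x):
        raise ValueError("vectors are not conformable")
    return sum(ai * xi for ai, xi in zip(a, x))

# the worked example from the text: c^T x = 6*6 + 2*3
print(inner([6, 2], [6, 3]))  # 42
```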
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
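The orthogonality condition is just an inner product that happens to be zero; a quick Python check (illustrative, with an assumed function name):

```python
def orthogonal(x, y):
    """True when x^T y == 0, i.e. x and y are orthogonal."""
    return sum(xi * yi for xi, yi in zip(x, y)) == 0

# the example from the text: 1*(-1) + 1*1 = 0
print(orthogonal([1, 1], [-1, 1]))  # True
```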
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
Figure 1:&lt;br /&gt;
&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination &amp;#039;&amp;#039;of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; (2)&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
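The across and down rule for a general matrix-vector product can be sketched in Python (an illustrative sketch; the function name is ours):

```python
def matvec(A, x):
    """Across-and-down rule: the i-th element of A x is sum_j A[i][j]*x[j].

    A is a list of rows; each row must have len(x) elements for the
    product to be defined.
    """
    if any(len(row) != len(x) for row in A):
        raise ValueError("A and x are not conformable")
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[6, 2],
     [3, 5]]
# the worked example: alpha_1^T x = 42, alpha_2^T x = 33
print(matvec(A, [6, 3]))  # [42, 33]
```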
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\ \ \ \ \ \ \ (by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \ \ \ \ \ \ (typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\ \ \ \ \ \ \ (the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
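The elementwise formula &amp;lt;math&amp;gt;c_{ik}=\sum_{j=1}^{n}a_{ij}b_{jk}&amp;lt;/math&amp;gt; is easy to code directly; a Python sketch (illustrative, not from the notes) reproduces the worked example above:

```python
def matmul(A, B):
    """Matrix product by the across-and-down rule: c_ik = sum_j a_ij*b_jk.

    The number of columns of A must equal the number of rows of B
    (conformability); the result is m x r.
    """
    n = len(B)  # rows of B
    if any(len(row) != n for row in A):
        raise ValueError("A and B are not conformable")
    r = len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(r)]
            for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(matmul(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```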
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product will have as many rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and as many columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so the tediousness of the calculations increases. The solution for numerical calculation is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;2&lt;br /&gt;3&lt;br /&gt;4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, just because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;, so that transposition in Matlab is very natural.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
8 4 12 0&lt;br /&gt;
&lt;br /&gt;
8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp()&amp;lt;/code&amp;gt; command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
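&lt;br /&gt;
The failure of commutativity is easy to check numerically. A minimal Python sketch (Python is not used in these notes; the helper &amp;lt;code&amp;gt;matmul&amp;lt;/code&amp;gt; simply implements the across and down rule):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    # across and down rule: entry (i, j) is the inner product of
    # row i of A with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

assert matmul(A, B) == [[8, 4], [8, -2]]
assert matmul(B, A) == [[9, 7], [3, -3]]
assert matmul(A, B) != matmul(B, A)   # A and B do not commute
```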
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from the expression for &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; given above.&lt;br /&gt;
&lt;br /&gt;
More generally, for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; columns, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
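&lt;br /&gt;
The product rule can be verified numerically. A Python sketch (not part of the original notes; &amp;lt;code&amp;gt;A&amp;lt;/code&amp;gt; is the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; used later in these notes, while &amp;lt;code&amp;gt;B&amp;lt;/code&amp;gt; is an arbitrary illustrative &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; matrix):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    # across and down rule for the (i, j) entry of AB
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    # column i of A becomes row i of A^T
    return [list(row) for row in zip(*A)]

A = [[6, 2, -3], [3, 5, -1]]   # 2 x 3
B = [[1, 0], [2, 1], [0, 4]]   # 3 x 2, illustrative values

# (AB)^T = B^T A^T
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# transposing twice recovers the original matrix
assert transpose(transpose(A)) == A
```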
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as columns of a matrix in the natural order, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
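&lt;br /&gt;
The identity property can be sketched in Python (not part of the original notes; the helpers implement the across and down rule and the pattern of &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    # across and down rule for the (i, j) entry of AB
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    # 1s on the diagonal, 0s everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[6, 2], [3, 5]]
I2 = identity(2)

assert matmul(I2, A) == A   # pre-multiplication leaves A unchanged
assert matmul(A, I2) == A   # post-multiplication leaves A unchanged
```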
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
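&lt;br /&gt;
The row and column scaling effects can be confirmed with the same &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; in Python (a sketch, not part of the original notes):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    # across and down rule for the (i, j) entry of AB
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[2, 0], [0, -2]]   # diagonal matrix

# post-multiplication scales the COLUMNS of A by the diagonal entries
assert matmul(A, B) == [[12, -4], [6, -10]]
# pre-multiplication scales the ROWS of A by the diagonal entries
assert matmul(B, A) == [[12, 4], [-6, -10]]
```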
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is fine on the diagonals, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the ’triangle’ of above diagonal elements coincides with the triangle of below diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
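&lt;br /&gt;
The symmetry of &amp;lt;math&amp;gt;C^{T}C&amp;lt;/math&amp;gt; for the matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; above can be confirmed numerically; a Python sketch (not part of the original notes):&lt;br /&gt;
&lt;br /&gt;
```python
def matmul(A, B):
    # across and down rule for the (i, j) entry of AB
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

C = [[6, 2, -3], [3, 5, -1]]   # the 2 x 3 matrix from the text
CtC = matmul(transpose(C), C)

assert CtC == [[45, 27, -21], [27, 29, -11], [-21, -11, 10]]
assert CtC == transpose(CtC)   # C^T C equals its own transpose: symmetric
```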
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is &amp;#039;&amp;#039;defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt;&amp;#039;&amp;#039; matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word ’outer’ being used to distinguish from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
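&lt;br /&gt;
Since each entry of the outer product is just &amp;lt;math&amp;gt;x_{i}y_{j},&amp;lt;/math&amp;gt; it is simple to compute directly; a Python sketch (not part of the original notes; the values of &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt; are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
x = [6, 3]        # n x 1, the vector used in the text
y = [1, 2, 3]     # m x 1, illustrative values

# outer product x y^T: entry (i, j) is x_i * y_j, so the result is n x m
outer = [[xi * yj for yj in y] for xi in x]

assert outer == [[6, 12, 18], [3, 6, 9]]
# each row is y^T scaled by the corresponding element of x
assert outer[0] == [6 * yj for yj in y]
```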
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the ’sample mean’ of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
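&lt;br /&gt;
The sum-vector calculation is easily reproduced; a Python sketch (not part of the original notes):&lt;br /&gt;
&lt;br /&gt;
```python
x = [6, 3]            # the vector from the text
ones = [1] * len(x)   # the sum vector 1_n

# the inner product 1^T x adds up the elements of x
inner = sum(o * xi for o, xi in zip(ones, x))
assert inner == 9
# dividing by the number of elements gives the sample mean
assert inner / len(x) == 4.5
```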
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1,&amp;lt;/math&amp;gt; e.g. the unit upper triangular matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].\label{eq:lt_matrix}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],\label{eq:partition_a}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, so that those in the second row block have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
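&lt;br /&gt;
This identity is easy to check numerically. The Python sketch below (an illustration, not part of the notes) uses a small made-up &amp;lt;math&amp;gt;2\times4&amp;lt;/math&amp;gt; matrix with column blocks of widths 1, 2 and 1:&lt;br /&gt;

```python
# Check A x = A1 x1 + A2 x2 + A3 x3 on a small made-up example:
# A is 2x4, split into column blocks of widths n1 = 1, n2 = 2, n3 = 1.
A = [[1, 2, 0, 3],
     [4, 1, 5, 2]]
x = [2, 1, 3, 1]

def matvec(M, v):
    # 'across and down': inner product of each row of M with v
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def col_block(M, j0, j1):
    # the sub-matrix made of columns j0 .. j1-1 of M
    return [row[j0:j1] for row in M]

blocks = [(0, 1), (1, 3), (3, 4)]
partitioned = [0] * len(A)
for j0, j1 in blocks:
    piece = matvec(col_block(A, j0, j1), x[j0:j1])
    partitioned = [p + q for p, q in zip(partitioned, piece)]
```

The partitioned sum agrees with the ordinary product computed in one go.&lt;br /&gt;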
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; from equation (8) and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
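&lt;br /&gt;
The construction of the data matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; from a column of ones and the height data can be sketched as follows (in Python, using the weight/height data above):&lt;br /&gt;

```python
# Build X = [1_12  x] from the data matrix D: first column weights (y),
# second column heights (x); each row of X is [1, x_i].
D = [[155, 70], [150, 63], [180, 72], [135, 60], [156, 66], [168, 70],
     [178, 74], [160, 65], [132, 62], [145, 67], [139, 65], [152, 68]]

y = [row[0] for row in D]   # dependent variable: weight
x = [row[1] for row in D]   # explanatory variable: height
X = [[1, xi] for xi in x]   # 12 x 2 design matrix
```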
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X’ * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X’ * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
&lt;br /&gt;
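The same quantities can be recomputed without Matlab; the Python sketch below builds &amp;lt;math&amp;gt;X^{T}X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;X^{T}\mathbf{y}&amp;lt;/math&amp;gt; directly from the sums involved:&lt;br /&gt;

```python
# For X = [1_12  x], X'X collects n, sum(x) and sum(x^2), and
# X'y collects sum(y) and sum(x*y): no general matrix code is needed.
heights = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]
weights = [155, 150, 180, 135, 156, 168, 178, 160, 132, 145, 139, 152]

n = len(heights)
sx = sum(heights)
sxx = sum(h * h for h in heights)
sy = sum(weights)
sxy = sum(h * w for h, w in zip(heights, weights))

xtx = [[n, sx], [sx, sxx]]   # X'X (2 x 2)
xty = [sy, sxy]              # X'y (2 x 1)
```
&lt;br /&gt;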
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3026</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3026"/>
				<updated>2013-09-10T14:10:16Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices whose elements are explicit numbers, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element&amp;#039;&amp;#039; notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows as columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be represented by column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so some transformation which converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; had different dimensions, some elements would be left over once all the elements of the smaller dimensioned vector had been used up, so the operation is not defined.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
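&lt;br /&gt;
A linear combination is simple to compute elementwise. The Python sketch below uses made-up values &amp;lt;math&amp;gt;\lambda=2,\ \mu=-1&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
# lam*x + mu*y, defined only for vectors of the same dimension.
def lincomb(lam, x, mu, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return [lam * xi + mu * yi for xi, yi in zip(x, y)]

z = lincomb(2, [1, 2, 3], -1, [4, 1, 0])
```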
&lt;br /&gt;
For matrices, these ideas carry over immediately: they are applied to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
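&lt;br /&gt;
The worked example can be verified elementwise in a couple of lines of Python:&lt;br /&gt;

```python
# lam*A + mu*B with lam = 1, mu = -2, computed element by element.
A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
lam, mu = 1, -2

result = [[lam * a + mu * b for a, b in zip(ra, rb)]
          for ra, rb in zip(A, B)]
```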
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
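&lt;br /&gt;
The across and down rule is one line of Python; the sketch below reproduces the &amp;lt;math&amp;gt;\mathbf{c}^{T}\mathbf{x}&amp;lt;/math&amp;gt; example:&lt;br /&gt;

```python
# Inner product: multiply corresponding elements and add up.
def inner(a, x):
    assert len(a) == len(x), "vectors must be conformable"
    return sum(ai * xi for ai, xi in zip(a, x))

value = inner([6, 2], [6, 3])   # c'x from the example
```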
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
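&lt;br /&gt;
Checking orthogonality is just an inner product computation, as in this Python sketch:&lt;br /&gt;

```python
# x and y are orthogonal when their inner product is zero.
def inner(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x = [1, 1]
y = [-1, 1]
orthogonal = (inner(x, y) == 0)
```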
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
Figure 1:&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination &amp;#039;&amp;#039;of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
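The across and down rule is easy to verify numerically. The following short sketch (in Python rather than Matlab, purely for illustration and not part of the original notes) computes the i-th entry of Ax as the inner product of row i of A with x, and reproduces the 2x2 example above:

```python
# Matrix-vector product by the "across and down" rule:
# entry i of A*x is the inner product of row i of A with x.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[6, 2],
     [3, 5]]
x = [6, 3]
print(mat_vec(A, x))  # [42, 33]

# Equivalently, A*x is the linear combination x1*a1 + x2*a2
# of the columns of A.
lin_comb = [6 * 6 + 3 * 2, 6 * 3 + 3 * 5]
print(lin_comb)  # [42, 33]
```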
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\ \ \ \ \ \ \ (by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \ \ \ \ \ \ (typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\ \ \ \ \ \ \ (the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product has the same number of rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and the same number of columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
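These rules can be captured in a few lines of code. The sketch below (Python, for illustration only, not part of the original Matlab material) builds AB column by column as A times each column of B, checks conformability, and reproduces the 2x4 product computed above:

```python
def mat_vec(A, x):
    # "Across and down": entry i is the inner product of row i with x.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def mat_mul(A, B):
    # Conformability: the number of columns of A must equal
    # the number of rows of B, otherwise AB is not defined.
    if len(A[0]) != len(B):
        raise ValueError("product not defined: dimensions do not conform")
    # The k-th column of AB is A times the k-th column of B.
    cols = [mat_vec(A, [row[k] for row in B]) for k in range(len(B[0]))]
    # Reassemble the columns into rows.
    return [[col[i] for col in cols] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(mat_mul(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```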
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tedium of the calculations. The solution for numerical work is to appeal to the computer: programs like Matlab and Excel (and a number of others, some of them free) handle these calculations easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
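The same two steps - elementwise multiplication followed by a sum - can be written in any language. Here is a Python analogue (illustrative only, not part of the original notes) of Matlab's sum(x .* y), using the rowvec/colvec values from the earlier example:

```python
# Inner product as "elementwise multiply, then sum",
# mirroring Matlab's sum(x .* y).
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]
elementwise = [xi * yi for xi, yi in zip(x, y)]
print(elementwise)       # [1, 4, 9, 16]
print(sum(elementwise))  # 30
```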
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
Matrix dimensions must agree.&lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt; - transposition in Matlab is very easy to write.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
 8 4 12 0&lt;br /&gt;
&lt;br /&gt;
 8 -2 6 0&lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp()&amp;lt;/code&amp;gt; command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
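Non-commutativity is easy to demonstrate numerically. A small Python check (illustrative only, not part of the original notes) of the 2x2 example above:

```python
def mat_mul(A, B):
    # Standard "across and down" product of two conforming matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
AB = mat_mul(A, B)
BA = mat_mul(B, A)
print(AB)  # [[8, 4], [8, -2]]
print(BA)  # [[9, 7], [3, -3]]
print(AB == BA)  # False: A and B do not commute
```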
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{ni}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from equation (6).&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
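The product rule can also be confirmed numerically. This Python sketch (illustrative only, not part of the original notes) checks that the transpose of AB equals the product of the transposes in reverse order, for the 2x2 and 2x4 matrices used earlier:

```python
def mat_mul(A, B):
    # "Across and down" product of conforming matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(M):
    # Rows of M become columns of the transpose.
    return [list(col) for col in zip(*M)]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
lhs = transpose(mat_mul(A, B))             # (AB)^T, a 4x2 matrix
rhs = mat_mul(transpose(B), transpose(A))  # B^T A^T, also 4x2
print(lhs == rhs)  # True
```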
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged in the natural order as the columns of a matrix, a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
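The construction of &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; from the coordinate vectors can be sketched in plain Python (an illustrative aside, not part of the Matlab material):

```python
n = 4
# e_k has a 1 in position k and zeros elsewhere
e = [[1 if i == k else 0 for i in range(n)] for k in range(n)]

# arrange e_1, ..., e_n as the columns of a matrix
I = [list(row) for row in zip(*e)]

print(I)
# the diagonal holds the 1 elements, every other element is zero
print(all(I[i][j] == (1 if i == j else 0)
          for i in range(n) for j in range(n)))  # True
```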
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
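The identity property too can be checked mechanically; a short plain-Python sketch of the pre and post multiplications above:

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

A = [[6, 2], [3, 5]]
I2 = [[1, 0], [0, 1]]

print(matmul(I2, A) == A)  # True: pre multiplication leaves A unchanged
print(matmul(A, I2) == A)  # True: post multiplication leaves A unchanged
```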
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
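This scaling effect means a product with a diagonal matrix never needs the full across and down rule; each entry of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is simply multiplied by one diagonal element. A plain-Python sketch for the &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; above:

```python
A = [[6, 2], [3, 5]]
d = [2, -2]  # the diagonal of B

# post multiplication AB: column j of A is scaled by d[j]
AB = [[a * d[j] for j, a in enumerate(row)] for row in A]
# pre multiplication BA: row i of A is scaled by d[i]
BA = [[d[i] * a for a in row] for i, row in enumerate(A)]

print(AB)  # [[12, -4], [6, -10]]
print(BA)  # [[12, 4], [-6, -10]]
```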
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is fine on the diagonals, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the ’triangle’ of above diagonal elements coincides with the triangle of below diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
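The &amp;lt;math&amp;gt;C^{T}C&amp;lt;/math&amp;gt; calculation and the symmetry argument can be reproduced in a few lines of plain Python:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

C = [[6, 2, -3], [3, 5, -1]]
CtC = matmul(transpose(C), C)

print(CtC)                    # [[45, 27, -21], [27, 29, -11], [-21, -11, 10]]
print(CtC == transpose(CtC))  # True: C'C equals its own transpose
```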
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; being used to distinguish it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
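Because there is only one product per row and column combination, the outer product is simply a multiplication table of the elements; a plain-Python sketch:

```python
x = [6, 3]
# x x^T: entry (i, j) is x_i * x_j
outer = [[xi * xj for xj in x] for xi in x]

print(outer)  # [[36, 18], [18, 9]]
# symmetric: entry (i, j) equals entry (j, i)
print(outer == [list(row) for row in zip(*outer)])  # True
```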
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the ’sample mean’ of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
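The sum-vector trick is a one-liner in any language; a plain-Python sketch for the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; above:

```python
x = [6, 3]
ones = [1] * len(x)  # the sum vector 1_2

total = sum(o * xi for o, xi in zip(ones, x))  # inner product 1'x
print(total)            # 9, the sum of the elements
print(total / len(x))   # 4.5, the sample mean
```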
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
\vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub - matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, and therefore &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows in the second row block. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
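The block representation can also be verified numerically. A plain-Python sketch, where the 2-by-4 matrix and the column widths n1 = 2, n2 = 1, n3 = 1 are made-up illustrative values:

```python
def matvec(A, x):
    # across and down rule for a matrix-vector product
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 0, 4],
     [8, 3, 7, 5]]
x = [1, 2, 3, 4]

# partition the columns of A, and the rows of x, conformably
A1 = [row[0:2] for row in A]; x1 = x[0:2]
A2 = [row[2:3] for row in A]; x2 = x[2:3]
A3 = [row[3:4] for row in A]; x3 = x[3:4]

full = matvec(A, x)
blocks = [p + q + r for p, q, r in
          zip(matvec(A1, x1), matvec(A2, x2), matvec(A3, x3))]

print(full == blocks)  # True: Ax = A1 x1 + A2 x2 + A3 x3
```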
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load('weights.mat'); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X' * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X' * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
&lt;br /&gt;
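As a cross-check, the two quantities can be recomputed in plain Python directly from the data matrix &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; given earlier (a hand-rolled sketch; the Matlab file weights.mat is assumed to hold the same numbers):

```python
D = [(155, 70), (150, 63), (180, 72), (135, 60), (156, 66), (168, 70),
     (178, 74), (160, 65), (132, 62), (145, 67), (139, 65), (152, 68)]
y = [w for w, h in D]  # weights, the dependent variable
x = [h for w, h in D]  # heights, the explanatory variable
n = len(D)

# X = [1_12  x], so X'X and X'y reduce to sums over the data
xtx = [[n, sum(x)],
       [sum(x), sum(h * h for h in x)]]
xty = [sum(y), sum(w * h for w, h in D)]

print(xtx)  # [[12, 802], [802, 53792]]
print(xty)  # [1850, 124258]
```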
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3025</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3025"/>
				<updated>2013-09-10T14:09:38Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculations with matrices with explicit numerical elements, as in the examples above is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices where the elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element&amp;#039;&amp;#039; notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; and is a &amp;#039;&amp;#039;square&amp;#039;&amp;#039; matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are many advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These can then be arranged as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be written as column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation which converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in long or complicated expressions. So it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
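Transposition is easy to sketch in plain Python, representing a matrix as a list of rows. This is only an illustration alongside the notes (the notes themselves use Matlab later), not part of the original material:

```python
def transpose(M):
    """Return the transpose of M, where M is a list of rows."""
    rows = len(M)
    cols = len(M[0])
    # Element (i, j) of M becomes element (j, i) of the transpose.
    return [[M[i][j] for i in range(rows)] for j in range(cols)]

# The column vector c = (6, 2) as a 2x1 matrix:
c = [[6], [2]]
cT = transpose(c)   # the 1x2 row vector [[6, 2]]
```

Applied to a full matrix, the same function swaps the roles of rows and columns, which is exactly how &lt;math&gt;A&lt;/math&gt; is rebuilt from &lt;math&gt;\mathbf{c}^{T}&lt;/math&gt; and &lt;math&gt;\boldsymbol{d}^{T}&lt;/math&gt; above.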
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimension. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, some elements of the larger vector would have no counterpart, so the operations are not defined.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
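The linear combination &lt;math&gt;\lambda\mathbf{x}+\mu\mathbf{y}&lt;/math&gt; can be sketched in a few lines of plain Python (lists as vectors; purely illustrative, not from the original notes):

```python
def linear_combination(lam, x, mu, y):
    """Elementwise lam*x + mu*y; x and y must have the same dimension."""
    assert len(x) == len(y), "vectors must be conformable"
    return [lam * xi + mu * yi for xi, yi in zip(x, y)]

# Example: 2*(1, 2) + 3*(4, 5) has typical element 2*x_i + 3*y_i
result = linear_combination(2, [1, 2], 3, [4, 5])   # [14, 19]
```

The assertion enforces the conformability requirement stated in the text.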
&lt;br /&gt;
For matrices, these ideas carry over immediately: apply the vector operations to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
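The calculation above can be checked with a short Python sketch, treating each matrix as a list of rows (an illustration only, not part of the original notes):

```python
def mat_lin_comb(lam, A, mu, B):
    """Elementwise lam*A + mu*B for matrices of equal dimensions."""
    return [[lam * a + mu * b for a, b in zip(rowA, rowB)]
            for rowA, rowB in zip(A, B)]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
C = mat_lin_comb(1, A, -2, B)   # A - 2B = [[4, 0], [1, 7]]
```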
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
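The across and down rule for the inner product is a one-liner in plain Python (no libraries assumed; purely an illustration):

```python
def inner(a, x):
    """Inner product a^T x: multiply corresponding elements and add up."""
    assert len(a) == len(x), "vectors must be conformable"
    return sum(ai * xi for ai, xi in zip(a, x))

c = [6, 2]
x = [6, 3]
value = inner(c, x)   # 6*6 + 2*3 = 42, as in the example above
```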
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
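Orthogonality is then just an inner product that evaluates to zero; a quick check in plain Python (illustrative, not from the original notes):

```python
def inner(a, b):
    """Inner product of two equal-length vectors."""
    return sum(ai * bi for ai, bi in zip(a, b))

x = [1, 1]
y = [-1, 1]
orthogonal = (inner(x, y) == 0)   # True: x and y are perpendicular
```

Replacing y by [1, -1] flips the sign of each element of y but leaves the inner product at zero, matching the remark about the redefined y.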
&lt;br /&gt;
Figure 1:&lt;br /&gt;
[[File:orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
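The typical-element formula translates directly into plain Python, one across and down inner product per row (a sketch only, not an optimised implementation):

```python
def matvec(A, x):
    """Product A x by the across and down rule; A is a list of rows.

    The i-th element is the sum over j of a_ij * x_j.
    """
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[6, 2], [3, 5]]
x = [6, 3]
Ax = matvec(A, x)   # [42, 33], as in the worked example above
```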
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\ \ \ \ \ \ \ (by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \ \ \ \ \ \ (typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\ \ \ \ \ \ \ (the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation (2), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
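The elementwise formula &lt;math&gt;c_{ik}=\sum_{j=1}^{n}a_{ij}b_{jk}&lt;/math&gt; can be checked against the example above with a short Python sketch (matrices as lists of rows; an illustration, not part of the original notes):

```python
def matmul(A, B):
    """Product AB; A is m x n, B is n x r, both lists of rows."""
    n = len(B)       # rows of B must equal columns of A (conformability)
    r = len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(r)]
            for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
C = matmul(A, B)   # [[8, 4, 12, 0], [8, -2, 6, 0]]
```

Each column of the result is indeed the promised combination of the columns of A: sum, difference, twice the first column, and zero.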
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product has as many rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and as many columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
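The across and down formula and the conformability check can be sketched in a few lines of Python (an illustrative sketch - the notes themselves use Matlab; the matrices are the ones from the worked example above):

```python
def matmul(A, B):
    # conformability: the number of columns in A must match
    # the number of rows in B, otherwise the product is undefined
    assert len(A[0]) == len(B), "matrices are not conformable"
    # entry (i, j) is the "across and down" sum of a_ik * b_kj
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[6, 2], [3, 5]]               # 2 x 2
B = [[1, 1, 2, 0], [1, -1, 0, 0]]  # 2 x 4
C = matmul(A, B)                   # 2 x 4, as in the text
print(C)  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```

Attempting `matmul(B, A)` trips the conformability assertion, since `B` has 4 columns but `A` has only 2 rows.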
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so the tedium of the calculations increases. The solution for numerical calculation is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained by summing the elements of the elementwise product &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec) &lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times &lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed, written &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;; transposition in Matlab is thus very natural to express.&lt;br /&gt;
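The same &amp;#039;&amp;#039;sum of elementwise products&amp;#039;&amp;#039; calculation can be sketched in Python (an illustrative analogue; Python lists carry no row/column orientation, so no transpose is needed):

```python
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]

# elementwise product, the analogue of Matlab's x .* y
elementwise = [a * b for a, b in zip(x, y)]

# summing the elementwise products gives the inner product,
# the analogue of Matlab's sum(x .* y)
inner = sum(elementwise)
print(inner)  # 30, matching rowvec*colvec above
```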
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
8 4 12 0 &lt;br /&gt;
&lt;br /&gt;
8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp()&amp;lt;/code&amp;gt; command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, we say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Differently sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
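A quick numerical check of non-commutativity, using the same &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; (an illustrative Python sketch):

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)        # [[8, 4], [8, -2]]
print(BA)        # [[9, 7], [3, -3]]
print(AB == BA)  # False: A and B do not commute
```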
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
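Transposition, and the fact that transposing twice recovers the original matrix, can also be sketched in Python (illustrative; columns become rows directly):

```python
def transpose(A):
    # each column of A becomes a row of the transpose
    return [list(col) for col in zip(*A)]

A = [[1, 1, 2, 0], [1, -1, 0, 0]]  # 2 x 4
At = transpose(A)                   # 4 x 2
print(At)                  # [[1, 1], [1, -1], [2, 0], [0, 0]]
print(transpose(At) == A)  # True: (A^T)^T = A
```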
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;C=AB.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;\left(2,3\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from the expression for &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; displayed above.&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
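The product rule for transposition is easy to verify numerically (an illustrative Python sketch, using the &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; from the first worked example):

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    # columns become rows
    return [list(col) for col in zip(*A)]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]

lhs = transpose(matmul(A, B))             # (AB)^T
rhs = matmul(transpose(B), transpose(A))  # B^T A^T
print(lhs == rhs)  # True
```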
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n}&amp;lt;/math&amp;gt; are arranged in the natural order as the columns of a matrix, a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
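The identity property can be checked numerically (an illustrative Python sketch):

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def identity(n):
    # 1s on the diagonal, 0s everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[6, 2], [3, 5]]
I2 = identity(2)
print(matmul(I2, A) == A)  # True: pre-multiplication leaves A unchanged
print(matmul(A, I2) == A)  # True: post-multiplication leaves A unchanged
```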
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
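The column-scaling and row-scaling effects of a diagonal matrix can be checked directly (an illustrative Python sketch, with the same &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as above):

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[6, 2], [3, 5]]
B = [[2, 0], [0, -2]]  # diagonal matrix

print(matmul(A, B))  # [[12, -4], [6, -10]]: columns of A scaled by 2, -2
print(matmul(B, A))  # [[12, 4], [-6, -10]]: rows of A scaled by 2, -2
```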
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is fine on the diagonals, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above-diagonal elements coincides with the triangle of below-diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
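The symmetry of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is easy to confirm numerically (an illustrative Python sketch, with the &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; used above):

```python
def matmul(A, B):
    # across and down rule
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    # columns become rows
    return [list(col) for col in zip(*A)]

C = [[6, 2, -3], [3, 5, -1]]  # 2 x 3
S = matmul(transpose(C), C)   # 3 x 3
print(S)  # [[45, 27, -21], [27, 29, -11], [-21, -11, 10]]
print(S == transpose(S))  # True: C^T C is symmetric
```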
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; being used to distinguish it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
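&lt;br /&gt;
The across and down rule is easy to check by machine. The following plain-Python sketch (illustrative only - the code later in these notes uses Matlab) reproduces the outer product just computed:&lt;br /&gt;
&lt;br /&gt;
```python
# Outer product xx^T for the column vector x = [6, 3]^T.
x = [6, 3]

# Element (i, j) of xx^T is x[i] * x[j]: one product per row/column pair.
outer = [[xi * xj for xj in x] for xi in x]

print(outer)  # [[36, 18], [18, 9]]

# The result is symmetric: element (i, j) equals element (j, i).
assert all(outer[i][j] == outer[j][i] for i in range(2) for j in range(2))
```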
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;&amp;#039;sample mean&amp;#039;&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
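&lt;br /&gt;
The sum-vector calculation can be verified the same way; a short Python sketch (illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
# Inner product of the sum vector 1_n with x gives the sum of x's elements.
x = [6, 3]
ones = [1] * len(x)

total = sum(o * xi for o, xi in zip(ones, x))
print(total)            # 9
print(total / len(x))   # 4.5, the sample mean of the elements of x
```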
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre-multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post-multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
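&lt;br /&gt;
A Python sketch (illustrative) of the two replication patterns:&lt;br /&gt;
&lt;br /&gt;
```python
# Pre-multiplying x^T by the sum vector 1 repeats x^T as rows;
# post-multiplying x by 1^T repeats x as columns.
x = [6, 3]
n = 2

rows = [[xi for xi in x] for _ in range(n)]   # 1 x^T: x^T in every row
cols = [[xi for _ in range(n)] for xi in x]   # x 1^T: x in every column

print(rows)  # [[6, 3], [6, 3]]
print(cols)  # [[6, 6], [3, 3]]
```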
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
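&lt;br /&gt;
The transposition claim can be checked numerically; the matrix below is a hypothetical numerical example, not one from the notes:&lt;br /&gt;
&lt;br /&gt;
```python
# A lower triangular matrix has zeros above the main diagonal;
# its transpose is upper triangular (zeros below the diagonal).
A = [[1, 0, 0],
     [4, 2, 0],
     [5, 6, 3]]

At = [[A[i][j] for i in range(3)] for j in range(3)]  # transpose of A

# Every element of At below the main diagonal is zero.
assert all(At[i][j] == 0 for i in range(3) for j in range(3) if i > j)
print(At)  # [[1, 4, 5], [0, 2, 6], [0, 0, 3]]
```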
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].\label{eq:lt_matrix}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],\label{eq:partition_a}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, so that those in the second row block have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the final &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
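&lt;br /&gt;
The equivalence of the block representation and the ordinary across and down rule can be checked on a small hypothetical example (the 2 by 4 matrix below is invented for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# Check that Ax equals A1*x1 + A2*x2 when A and x are partitioned conformably.
A = [[1, 2, 0, 4],
     [3, 1, 5, 2]]
x = [1, 2, 3, 4]

def matvec(M, v):
    """Across-and-down rule: the i-th element is sum_j M[i][j] * v[j]."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

full = matvec(A, x)

# Partition: A1 = first two columns, A2 = last two; x1, x2 likewise.
A1 = [row[:2] for row in A]
A2 = [row[2:] for row in A]
x1, x2 = x[:2], x[2:]
blocks = [a + b for a, b in zip(matvec(A1, x1), matvec(A2, x2))]

print(full, blocks)  # [21, 28] [21, 28]
assert full == blocks
```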
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; from equation (8) and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students, collected in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
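&lt;br /&gt;
To see that the matrix form reproduces the scalar model row by row, consider a small Python sketch (the parameter values for &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt; below are hypothetical, chosen only for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
# The i-th element of X*delta is alpha + beta * x_i, matching the scalar model.
x = [70, 63, 72]          # first three heights from D, for illustration
alpha, beta = 2.0, 0.5    # hypothetical parameter values

X = [[1, xi] for xi in x]             # X = [1 x]
delta = [alpha, beta]

Xd = [sum(a * b for a, b in zip(row, delta)) for row in X]
assert Xd == [alpha + beta * xi for xi in x]
print(Xd)  # [37.0, 33.5, 38.0]
```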
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X’ * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X’ * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
&lt;br /&gt;
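For readers without access to Matlab, the same two quantities can be computed directly from the data matrix &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; with plain Python (an illustrative sketch; the data are typed in from &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; rather than loaded from the weights.mat file used above):&lt;br /&gt;
&lt;br /&gt;
```python
# Compute X^T X and X^T y for the weight/height data, with X = [1 x].
y = [155, 150, 180, 135, 156, 168, 178, 160, 132, 145, 139, 152]  # weights
x = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]              # heights

X = [[1, xi] for xi in x]

# X^T X is 2x2: [[n, sum x], [sum x, sum x^2]].
xtx = [[sum(a[i] * a[j] for a in X) for j in range(2)] for i in range(2)]
# X^T y is 2x1: [sum y, sum x*y].
xty = [sum(a[i] * yi for a, yi in zip(X, y)) for i in range(2)]

print(xtx)  # [[12, 802], [802, 53792]]
print(xty)  # [1850, 124258]
```
&lt;br /&gt;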
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3024</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3024"/>
				<updated>2013-09-10T13:55:59Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and&lt;br /&gt;
&lt;br /&gt;
the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\times&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices whose elements are explicit numbers, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices where the elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element &amp;#039;&amp;#039;notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows and columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be seen as elements of column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation which converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, the operation is not defined: some elements of the larger vector would have no counterpart in the smaller one.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
&lt;br /&gt;
For matrices, these ideas carry over immediately: apply to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu=-2.&amp;lt;/math&amp;gt; Then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
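These elementwise rules are easy to verify numerically. The following sketch (in Python with plain nested lists, purely for illustration; the function name lincomb is our own) forms the linear combination of two matrices and reproduces the example above:

```python
# Linear combination of two m x n matrices, elementwise:
# (lam*A + mu*B)[i][j] = lam*A[i][j] + mu*B[i][j]
def lincomb(lam, A, mu, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0])  # same order
    return [[lam * a + mu * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
print(lincomb(1, A, -2, B))  # A - 2B -> [[4, 0], [1, 7]]
```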
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix - vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
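As a quick check of the definition, here is a sketch in Python using plain lists (the function name inner is an illustrative choice, not a standard library function):

```python
# Inner product a^T x: multiply corresponding elements, then add them up
def inner(a, x):
    assert len(a) == len(x)  # the vectors must be conformable
    return sum(ai * xi for ai, xi in zip(a, x))

print(inner([6, 2], [6, 3]))  # 36 + 6 = 42
```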
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure 1. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
Figure 1:&lt;br /&gt;
[[Media:orthy_example.png]]&lt;br /&gt;
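Orthogonality can be checked with the same inner product calculation (again a Python sketch with plain lists):

```python
# Inner product of two conformable vectors
def inner(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

x = [1, 1]
print(inner(x, [-1, 1]))  # -1 + 1 = 0: orthogonal
print(inner(x, [1, -1]))  # 1 - 1 = 0: still orthogonal after reflecting y
```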
&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
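In code, the matrix - vector product is simply one inner product per row. A minimal Python sketch (matrices represented as lists of rows; function names are illustrative):

```python
# Inner product of two conformable vectors
def inner(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

# A x: one "across and down" inner product for each row of A
def matvec(A, x):
    return [inner(row, x) for row in A]

A = [[6, 2], [3, 5]]
print(matvec(A, [6, 3]))  # [42, 33]
```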
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
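The two readings of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; - row-by-row inner products versus a linear combination of the columns - must produce the same vector. A Python sketch comparing the two (illustrative function names, list-of-rows matrices):

```python
def matvec_rows(A, x):
    # "across and down": the i-th entry is the inner product of row i with x
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def matvec_cols(A, x):
    # linear combination of columns: accumulate x_j times column j of A
    result = [0] * len(A)
    for j, xj in enumerate(x):
        for i in range(len(A)):
            result[i] += A[i][j] * xj
    return result

A = [[6, 2], [3, 5]]
x = [6, 3]
print(matvec_rows(A, x), matvec_cols(A, x))  # [42, 33] [42, 33]
```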
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
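This column-by-column definition of the product translates directly into code. A Python sketch (rows-of-lists representation; the function names are our own):

```python
# A x for one vector x
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul_by_columns(A, B):
    # the k-th column of C = AB is the matrix - vector product A b_k
    cols = [matvec(A, [row[k] for row in B]) for k in range(len(B[0]))]
    # reassemble the list of columns into a list of rows
    return [[col[i] for col in cols] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(matmul_by_columns(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```

This reproduces the worked example further below, where the columns of &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; pick out sums, differences, and multiples of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;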
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\,\,\,\,\,\,\,(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \,\,\,\,\,\,(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\,\,\,\,\,\,\,(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from the matrix - vector product formula above, the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product will have number of rows equal to the number in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and number of columns equal to the number in &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tedium of the calculations. The solution for numerical calculation is to appeal to the computer: programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
&lt;br /&gt;
2&lt;br /&gt;
&lt;br /&gt;
3&lt;br /&gt;
&lt;br /&gt;
4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;, so transposition in Matlab is very natural.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
 8 4 12 0&lt;br /&gt;
&lt;br /&gt;
 8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp() &amp;lt;/code&amp;gt;command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied&amp;#039;&amp;#039; by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre&amp;#039;&amp;#039; and &amp;#039;&amp;#039;post&amp;#039;&amp;#039; multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
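A Python sketch of this non-commutativity example, with the product computed elementwise by the across and down rule (matmul is our own name for the function):

```python
def matmul(A, B):
    # c_ik = sum_j a_ij * b_jk ("across and down")
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
print(matmul(A, B))  # [[8, 4], [8, -2]]
print(matmul(B, A))  # [[9, 7], [3, -3]]
print(matmul(A, B) == matmul(B, A))  # False: A and B do not commute
```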
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{mi}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
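&lt;br /&gt;
These transposition rules are easy to verify numerically; the following sketch uses Python with the numpy library (an illustrative choice - the notes themselves prescribe no software).&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# A is 2x3, so A.T is 3x2
A = np.array([[1, 2, 3],
              [4, 5, 6]])

assert A.shape == (2, 3)
assert A.T.shape == (3, 2)

# the first column of A becomes the first row of A.T
assert np.array_equal(A.T[0], A[:, 0])

# transposing twice reproduces the original matrix
assert np.array_equal(A.T.T, A)
```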
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where the &amp;lt;math&amp;gt;\left(2,3\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;C=AB&amp;lt;/math&amp;gt; is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is indeed &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the ordering of the factors reflects the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals the expression for &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; given above.&lt;br /&gt;
&lt;br /&gt;
More generally, for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; of dimension &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; of dimension &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{n}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
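&lt;br /&gt;
As a quick numerical check of the product rule for transposition, the sketch below (Python with numpy, an illustrative assumption) confirms that &amp;lt;math&amp;gt;\left(AB\right)^{T}=B^{T}A^{T}&amp;lt;/math&amp;gt; for randomly chosen conformable matrices.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))   # 2x3
B = rng.integers(-5, 5, size=(3, 4))   # 3x4

C = A @ B                              # 2x4

# product rule: (AB)^T = B^T A^T
assert np.array_equal(C.T, B.T @ A.T)

# transposing twice recovers C
assert np.array_equal(C.T.T, C)
```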
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
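&lt;br /&gt;
A minimal numerical check of the effect of the zero matrix, again sketched in Python with numpy:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

A = np.array([[6, 2],
              [3, 5]])
Z = np.zeros((2, 2))   # the 2x2 zero matrix

# pre or post multiplication by the zero matrix gives a zero matrix
assert np.array_equal(Z @ A, Z)
assert np.array_equal(A @ Z, Z)
```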
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When arranged as columns of a matrix in the natural order, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; a matrix with a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
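&lt;br /&gt;
The identity property is easy to check numerically; this Python/numpy sketch repeats the pre and post multiplication above:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

A = np.array([[6, 2],
              [3, 5]])
I2 = np.eye(2)   # the 2x2 identity matrix

assert np.array_equal(I2 @ A, A)   # pre multiplication leaves A unchanged
assert np.array_equal(A @ I2, A)   # post multiplication too
```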
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
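&lt;br /&gt;
The row and column scaling effects of a diagonal matrix can be reproduced numerically; this Python/numpy sketch uses the same &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as above:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

A = np.array([[6, 2],
              [3, 5]])
B = np.diag([2, -2])   # diagonal matrix with entries 2 and -2

# post multiplication scales the COLUMNS of A by the diagonal entries
assert np.array_equal(A @ B, np.array([[12, -4], [6, -10]]))

# pre multiplication scales the ROWS of A by the diagonal entries
assert np.array_equal(B @ A, np.array([[12, 4], [-6, -10]]))
```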
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This is automatic on the diagonal, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the ’triangle’ of above diagonal elements coincides with the triangle of below diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
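&lt;br /&gt;
This proposition can be checked numerically on the example above; the Python/numpy sketch below recomputes &amp;lt;math&amp;gt;C^{T}C&amp;lt;/math&amp;gt; and tests symmetry:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

C = np.array([[6, 2, -3],
              [3, 5, -1]])

G = C.T @ C    # a 3x3 matrix

# matches the matrix computed in the text
assert np.array_equal(G, np.array([[ 45,  27, -21],
                                   [ 27,  29, -11],
                                   [-21, -11,  10]]))

# and it is symmetric: G equals its transpose
assert np.array_equal(G, G.T)
```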
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose one considered the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; this is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; so the resulting product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word ’outer’ being used to distinguish it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the ’matrix’ &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
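&lt;br /&gt;
A Python/numpy sketch of this outer product calculation (an illustrative tool choice):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

x = np.array([6, 3])

outer = np.outer(x, x)    # x x^T, a 2x2 matrix

# matches the matrix in the text
assert np.array_equal(outer, np.array([[36, 18],
                                       [18,  9]]))

# the outer product of x with itself is symmetric
assert np.array_equal(outer, outer.T)
```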
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the ’sample mean’ of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
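&lt;br /&gt;
The sum vector calculation can be sketched in Python with numpy:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

x = np.array([6, 3])
ones = np.ones(2)          # the sum vector 1_2

total = ones @ x           # the inner product 1^T x
assert total == 9          # the sum of the elements of x

# dividing by the number of elements gives the sample mean
assert total / len(x) == 4.5
```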
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
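&lt;br /&gt;
These two outer products can be reproduced in Python with numpy:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

x = np.array([6, 3])
ones = np.ones(2)

# 1 x^T stacks x^T as the rows of the product
assert np.array_equal(np.outer(ones, x), np.array([[6., 3.],
                                                   [6., 3.]]))

# x 1^T repeats x as the columns of the product
assert np.array_equal(np.outer(x, ones), np.array([[6., 6.],
                                                   [3., 3.]]))
```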
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, so that those in the second row block have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j}&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point about the use of partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
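&lt;br /&gt;
This block representation of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be checked numerically; the Python/numpy sketch below partitions the columns of a random &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and the rows of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; conformably (the dimensions chosen are arbitrary):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(1)
m, n1, n2, n3 = 4, 2, 3, 2
A = rng.standard_normal((m, n1 + n2 + n3))
x = rng.standard_normal(n1 + n2 + n3)

# partition A by columns and x by rows, conformably
A1, A2, A3 = A[:, :n1], A[:, n1:n1 + n2], A[:, n1 + n2:]
x1, x2, x3 = x[:n1], x[n1:n1 + n2], x[n1 + n2:]

# A x equals the sum of the block products
assert np.allclose(A @ x, A1 @ x1 + A2 @ x2 + A3 @ x3)
```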
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But, what are the row dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; What are the possible column dimensions for the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X&amp;#039; * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X&amp;#039; * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
&lt;br /&gt;
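As a cross-check, the same two quantities can be computed directly from the data tabulated above. The following sketch uses Python with plain lists, and so does not depend on the weights.mat file assumed in the snippet.

```python
# Cross-check of X'X and X'y for the weight/height data.
# X = [1_12  x] has a column of ones and the height column; y is the weight column.
weights = [155, 150, 180, 135, 156, 168, 178, 160, 132, 145, 139, 152]
heights = [70, 63, 72, 60, 66, 70, 74, 65, 62, 67, 65, 68]

n = len(weights)
xtx = [[n,            sum(heights)],
       [sum(heights), sum(h * h for h in heights)]]
xty = [sum(weights),
       sum(w * h for w, h in zip(weights, heights))]

print(xtx)  # [[12, 802], [802, 53792]]
print(xty)  # [1850, 124258]
```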
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Orthy_example.png&amp;diff=3023</id>
		<title>File:Orthy example.png</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Orthy_example.png&amp;diff=3023"/>
				<updated>2013-09-10T13:49:36Z</updated>
		
		<summary type="html">&lt;p&gt;LG: For Lecture 2, Figure 1&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For Lecture 2, Figure 1&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3022</id>
		<title>LNotes</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=LNotes&amp;diff=3022"/>
				<updated>2013-09-10T13:46:43Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Created page with &amp;quot;= Matrices =  In the PreSession Maths course, a matrix was defined as follows:  &amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, con-  ventional...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Matrices =&lt;br /&gt;
&lt;br /&gt;
In the PreSession Maths course, a matrix was defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A matrix is a rectangular array of numbers enclosed in parentheses, conventionally denoted by a capital letter. The number of rows (say &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;) and the number of columns (say &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;) determine the order of the matrix (&amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt;).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
Two examples were given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
P &amp;amp; =\left[\begin{array}{rrr}&lt;br /&gt;
2 &amp;amp; 3 &amp;amp; 4\\&lt;br /&gt;
3 &amp;amp; 1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ Q=\left[\begin{array}{rr}&lt;br /&gt;
2 &amp;amp; 3\\&lt;br /&gt;
4 &amp;amp; 3\\&lt;br /&gt;
1 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
matrices of dimensions &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;3\times2&amp;lt;/math&amp;gt; respectively.&lt;br /&gt;
&lt;br /&gt;
Why study matrices for econometrics? Basically because a data set of several variables, e.g. on the weights and heights of 12 students, can be thought of as a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
D &amp;amp; =\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The properties of matrices can then be used to facilitate answering all the usual questions of econometrics - list not given here!&lt;br /&gt;
&lt;br /&gt;
Calculation with matrices whose elements are explicit numbers, as in the examples above, is called matrix &amp;#039;&amp;#039;arithmetic&amp;#039;&amp;#039;. Matrix &amp;#039;&amp;#039;algebra&amp;#039;&amp;#039; is the algebra of matrices whose elements are not made explicit: this is what is really required for econometrics, as we shall see.&lt;br /&gt;
&lt;br /&gt;
As an example of this, a &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix might be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{ccc}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and would equal &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; above if the collection of &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; were given appropriate numerical values.&lt;br /&gt;
&lt;br /&gt;
A general &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is also a &amp;#039;&amp;#039;typical element&amp;#039;&amp;#039; notation for matrices:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left\Vert a_{ij}\right\Vert ,\ \ \ \ \ i=1,...,m,j=1,...,n,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt; is the element at the intersection of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th column in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;m\neq n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;rectangular&amp;#039;&amp;#039; matrix; when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a square matrix, having the same number of rows as columns.&lt;br /&gt;
&lt;br /&gt;
== Rows, columns and vectors ==&lt;br /&gt;
&lt;br /&gt;
Clearly, there is no reason why &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; cannot equal 1: so, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix with &amp;lt;math&amp;gt;n=1,&amp;lt;/math&amp;gt; i.e. with one column, is usually called a column vector. Similarly, a matrix with one row is a row vector.&lt;br /&gt;
&lt;br /&gt;
There are a lot of advantages to thinking of matrices as collections of row or column vectors, as we shall see. As an example, define the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; column vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\mathbf{,\ \ \ b}=\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and arrange them as the columns of the &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\label{eq:axy}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements can be written as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happens when both &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; are equal to &amp;lt;math&amp;gt;1?&amp;lt;/math&amp;gt; Then, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, but it is also considered to be a real number, or &amp;#039;&amp;#039;scalar&amp;#039;&amp;#039; in the language of linear algebra:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[a_{11}\right]=a_{11}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is perhaps a little odd, but turns out to be a useful convention in a number of situations.&lt;br /&gt;
&lt;br /&gt;
== Transposition of vectors ==&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; in equation (1) can be seen as elements of column vectors, say:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c} &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
2&lt;br /&gt;
\end{array}\right],\ \ \ \boldsymbol{d}=\left[\begin{array}{r}&lt;br /&gt;
3\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This representation of row vectors as column vectors is a bit clumsy, so a transformation that converts a column vector into a row vector, and vice versa, would be useful. The process of converting a column vector into a row vector is called &amp;#039;&amp;#039;transposition&amp;#039;&amp;#039;, and the transposed version of &amp;lt;math&amp;gt;\mathbf{c}&amp;lt;/math&amp;gt; is denoted:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{c}^{T} &amp;amp; =\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; superscript denoting transposition. In practice, a prime, &amp;lt;math&amp;gt;^{\prime},&amp;lt;/math&amp;gt; is often used instead of &amp;lt;math&amp;gt;^{T}.&amp;lt;/math&amp;gt; However, whilst the prime is much simpler to write than the &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; sign, it is also much easier to lose track of in writing out long or complicated expressions. So, it is best initially to use &amp;lt;math&amp;gt;^{T}&amp;lt;/math&amp;gt; to denote transposition rather than the prime &amp;lt;math&amp;gt;^{\prime}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can then be written via its rows as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; =\left[\begin{array}{r}&lt;br /&gt;
\mathbf{c}^{T}\\&lt;br /&gt;
\boldsymbol{d}^{T}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same ideas can be applied to the matrices &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Q.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Operations with matrices =&lt;br /&gt;
&lt;br /&gt;
== Addition, subtraction and scalar multiplication ==&lt;br /&gt;
&lt;br /&gt;
For vectors, addition and subtraction are defined only for vectors of the same dimensions. If:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
y_{n}&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x+y} &amp;amp; =\left[\begin{array}{c}&lt;br /&gt;
x_{1}+y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}+y_{n}&lt;br /&gt;
\end{array}\right],\,\,\,\,\mathbf{x-y}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}-y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}-y_{n}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clearly, the addition or subtraction operation is &amp;#039;&amp;#039;elementwise&amp;#039;&amp;#039;. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; have different dimensions, there will be some elements left over once all the elements of the smaller-dimensioned vector have been used up, so the operation is not defined.&lt;br /&gt;
&lt;br /&gt;
Another operation is &amp;#039;&amp;#039;scalar multiplication&amp;#039;&amp;#039;: if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a real number or scalar, the product &amp;lt;math&amp;gt;\lambda\mathbf{x}&amp;lt;/math&amp;gt; is defined as: &amp;lt;math&amp;gt;\lambda\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that every element of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is multiplied by the same scalar &amp;lt;math&amp;gt;\lambda.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The two types of operation can be combined into the &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}&lt;br /&gt;
\end{array}\right]+\left[\begin{array}{c}&lt;br /&gt;
\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mu y_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\lambda x_{1}+\mu y_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\lambda x_{n}+\mu y_{n}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equally, one can define the linear combination of vectors &amp;lt;math&amp;gt;\mathbf{x,y,}\ldots,\mathbf{z}&amp;lt;/math&amp;gt; by scalars &amp;lt;math&amp;gt;\lambda,\mu,\ldots,\nu&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\lambda\mathbf{x}+\mu\mathbf{y}+\ldots+\nu\mathbf{z}&amp;lt;/math&amp;gt; with typical element: &amp;lt;math&amp;gt;\lambda x_{i}+\mu y_{i}+\ldots+\nu z_{i},&amp;lt;/math&amp;gt; provided that all the vectors have the same dimension.&lt;br /&gt;
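A linear combination is computed elementwise; a minimal Python sketch, with made-up numbers:

```python
# Linear combination lambda*x + mu*y, defined elementwise.
def lincomb(lam, x, mu, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return [lam * xi + mu * yi for xi, yi in zip(x, y)]

print(lincomb(2, [1, 2, 3], -1, [4, 5, 6]))  # [-2, -1, 0]
```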
&lt;br /&gt;
For matrices, these ideas carry over immediately: the operations apply to each column of the matrices involved. For example, if &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{b}_{n}\end{array}\right],&amp;lt;/math&amp;gt; both &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; then addition and subtraction are defined elementwise, as for vectors:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A+B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}+\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}+\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}+b_{ij}\right\Vert ,\\&lt;br /&gt;
A-B &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1}-\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}-\mathbf{b}_{n}\end{array}\right]=\left\Vert a_{ij}-b_{ij}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scalar multiplication of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; involves multiplying every column vector of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda,&amp;lt;/math&amp;gt; and therefore multiplying every element of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\lambda A=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}\right\Vert .&amp;lt;/math&amp;gt; With the same idea for &amp;lt;math&amp;gt;B,&amp;lt;/math&amp;gt; the linear combination of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\lambda A+\mu B=\left[\begin{array}{rrr}&lt;br /&gt;
\lambda\mathbf{a}_{1}+\mu\mathbf{b}_{1} &amp;amp; \ldots &amp;amp; \lambda\mathbf{a}_{n}+\mu\mathbf{b}_{n}\end{array}\right]=\left\Vert \lambda a_{ij}+\mu b_{ij}\right\Vert .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, consider the matrices: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\lambda=1,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mu=-2:&amp;lt;/math&amp;gt; then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\lambda A+\mu B &amp;amp; = &amp;amp; A-2B\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
4 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; 7&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
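The same calculation in Python, using the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; of this example, confirms the result:

```python
# lambda*A + mu*B with lambda = 1, mu = -2, i.e. A - 2B, computed elementwise.
A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
lam, mu = 1, -2

C = [[lam * a + mu * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(C)  # [[4, 0], [1, 7]]
```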
&lt;br /&gt;
== Matrix - vector products ==&lt;br /&gt;
&lt;br /&gt;
=== Inner product ===&lt;br /&gt;
&lt;br /&gt;
The simplest form of a matrix vector product is the case where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; consists of one row, so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;1\times n&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\mathbf{a}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right].&amp;lt;/math&amp;gt; If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the product &amp;lt;math&amp;gt;A\mathbf{x}=\mathbf{a}^{T}\mathbf{x}&amp;lt;/math&amp;gt; is called the &amp;#039;&amp;#039;inner product&amp;#039;&amp;#039; and is defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{a}^{T}\mathbf{x} &amp;amp; =a_{1}x_{1}+\ldots+a_{n}x_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that the definition amounts to multiplying corresponding elements in &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and adding up the resultant products. Writing: &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x=}\left[\begin{array}{rrr}&lt;br /&gt;
a_{1} &amp;amp; \ldots &amp;amp; a_{n}\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=a_{1}x_{1}+\ldots+a_{n}x_{n}&amp;lt;/math&amp;gt; motivates the familiar description of the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; for this product: &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; is the &amp;#039;multiply corresponding elements&amp;#039; part of the definition.&lt;br /&gt;
&lt;br /&gt;
Notice that the result of the inner product is a real number, for example: &amp;lt;math&amp;gt;\mathbf{c}^{T}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right],\ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{c}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 2\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=36+6=42.&amp;lt;/math&amp;gt;&lt;br /&gt;
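The inner product is one line of Python; this sketch recomputes the example just given:

```python
# Inner product a^T x: multiply corresponding elements and add up.
def inner(a, x):
    assert len(a) == len(x), "conformability: same number of elements"
    return sum(ai * xi for ai, xi in zip(a, x))

print(inner([6, 2], [6, 3]))  # 42
```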
&lt;br /&gt;
In general, in the product &amp;lt;math&amp;gt;\mathbf{a}^{T}\mathbf{x,}&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have the same number of elements, &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; say, for the product to be defined. If &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; had different numbers of elements, there would be some elements of &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; left over or not used in the product: e.g.: &amp;lt;math&amp;gt;\mathbf{b}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
2\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{x=}\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; When the inner product of two vectors is defined, the vectors are said to be &amp;#039;&amp;#039;conformable&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== Orthogonality ==&lt;br /&gt;
&lt;br /&gt;
Two vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; with the property that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0&amp;lt;/math&amp;gt; are said to be orthogonal to each other. For example, if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
-1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is clear that &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}=0.&amp;lt;/math&amp;gt; This seems a rather innocuous definition, and yet the idea of orthogonality turns out to be extremely important in econometrics.&lt;br /&gt;
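The orthogonality of the example vectors is easily verified:

```python
# x^T y for x = [1, 1], y = [-1, 1]: the inner product should be zero.
x, y = [1, 1], [-1, 1]
ip = sum(xi * yi for xi, yi in zip(x, y))
print(ip)  # 0
```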
&lt;br /&gt;
If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; are thought of as points in &amp;lt;math&amp;gt;R^{2},&amp;lt;/math&amp;gt; and arrows are drawn from the origin to &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and to &amp;lt;math&amp;gt;\mathbf{y,}&amp;lt;/math&amp;gt; then the two arrows are perpendicular to each other - see Figure [orthy_example]. If &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; were defined as: &amp;lt;math&amp;gt;\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
-1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the position of the &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; vector and the corresponding arrow would change, but the perpendicularity property would still hold.&lt;br /&gt;
&lt;br /&gt;
[[Image:Orthy_example.png]]&lt;br /&gt;
&lt;br /&gt;
[orthy_example]&lt;br /&gt;
&lt;br /&gt;
=== Matrix - vector products ===&lt;br /&gt;
&lt;br /&gt;
Since the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; has two rows, now denoted &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{1}^{T}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\boldsymbol{\alpha}_{2}^{T},&amp;lt;/math&amp;gt; there are two possible inner products with the vector:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{x} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]:\\&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x} &amp;amp; = &amp;amp; 42,\ \ \ \ \ \boldsymbol{\alpha}_{2}^{T}\mathbf{x}=33.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Assembling the two inner product values into a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector defines the product of the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; with the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
\boldsymbol{\alpha}_{1}^{T}\mathbf{x}\\&lt;br /&gt;
\boldsymbol{\alpha}_{2}^{T}\mathbf{x}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Focussing only on the part: &amp;lt;math&amp;gt;\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{r}&lt;br /&gt;
42\\&lt;br /&gt;
33&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; one can see that each element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is obtained from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; argument.&lt;br /&gt;
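Both views of the product, row by row and (as below) as a combination of the columns, can be checked for this example; a Python sketch:

```python
# A x for the example: across-and-down per row, and as the linear
# combination x1*(first column of A) + x2*(second column of A).
A = [[6, 2], [3, 5]]
x = [6, 3]

across_down = [sum(a * xi for a, xi in zip(row, x)) for row in A]
col_combo   = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(2)]
print(across_down, col_combo)  # [42, 33] [42, 33]
```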
&lt;br /&gt;
Sometimes this product is described as forming a &amp;#039;&amp;#039;linear combination&amp;#039;&amp;#039; of the columns of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; using the scalar elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A\mathbf{x}=6\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]+3\left[\begin{array}{r}&lt;br /&gt;
2\\&lt;br /&gt;
5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; More generally, if:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right],\ \ \ \ \ \mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
\lambda\\&lt;br /&gt;
\mu&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
A\mathbf{x} &amp;amp; = &amp;amp; \lambda\mathbf{a}+\mu\mathbf{b.}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The general version of these ideas for an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \mathbf{a}_{2} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right]&amp;lt;/math&amp;gt; is straightforward. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, then the vector &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is, by the &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A\mathbf{x}=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
a_{11}x_{1}+\ldots+a_{1n}x_{n}\\&lt;br /&gt;
a_{21}x_{1}+\ldots+a_{2n}x_{n}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{m1}x_{1}+\ldots+a_{mn}x_{n}&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{c}&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{1j}x_{j}\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{2j}x_{j}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\sum\limits _{j=1}^{n}a_{mj}x_{j}&lt;br /&gt;
\end{array}\right],\label{eq:ab}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
so that the typical element, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th, is &amp;lt;math&amp;gt;\sum\limits _{j=1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt; Equally, &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is the linear combination &amp;lt;math&amp;gt;\mathbf{a}_{1}x_{1}+\ldots+\mathbf{a}_{n}x_{n}&amp;lt;/math&amp;gt; of the columns of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt;&lt;br /&gt;
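Both views of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; are easy to verify mechanically. Here is a minimal pure-Python sketch (illustrative only; the notes themselves use Matlab later) that computes the product by the across and down rule and again as a linear combination of the columns, using the matrix &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; from the introduction:

```python
# Hypothetical sketch: the "across and down" rule for A x,
# and the same result as a linear combination of the columns of A.
def matvec(A, x):
    # i-th entry is sum over j of a_ij * x_j
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[2, 3, 4],
     [3, 1, 5]]          # the 2-by-3 matrix P from the introduction
x = [1, 2, 1]

by_rows = matvec(A, x)

# linear combination a_1*x_1 + ... + a_n*x_n of the columns of A
by_cols = [0] * len(A)
for j in range(len(x)):
    for i in range(len(A)):
        by_cols[i] += A[i][j] * x[j]

print(by_rows, by_cols)  # both give [12, 10]
```

Both calculations deliver the same vector, as the algebra says they must.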
&lt;br /&gt;
== Matrix - matrix products ==&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{a}_{1},\ldots,\mathbf{a}_{n},&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; with columns &amp;lt;math&amp;gt;\mathbf{b}_{1},\ldots,\mathbf{b}_{r}.&amp;lt;/math&amp;gt; Clearly, each product &amp;lt;math&amp;gt;A\mathbf{b}_{1},...,A\mathbf{b}_{r}&amp;lt;/math&amp;gt; exists, and is &amp;lt;math&amp;gt;m\times1.&amp;lt;/math&amp;gt; These products can be arranged as the columns of a matrix as &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]&amp;lt;/math&amp;gt; and this matrix is &amp;#039;&amp;#039;defined&amp;#039;&amp;#039; to be the product &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; of the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrrr}&lt;br /&gt;
A\mathbf{b}_{1} &amp;amp; A\mathbf{b}_{2} &amp;amp; \ldots &amp;amp; A\mathbf{b}_{r}\end{array}\right]=AB.&amp;lt;/math&amp;gt; By construction, this must be an &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix, since each column is &amp;lt;math&amp;gt;m\times1&amp;lt;/math&amp;gt; and there are &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; columns.&lt;br /&gt;
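The column-by-column definition &amp;lt;math&amp;gt;C=\left[A\mathbf{b}_{1}\ \ldots\ A\mathbf{b}_{r}\right]&amp;lt;/math&amp;gt; can be sketched directly. The following pure-Python fragment (illustrative only, not part of the Matlab material below) builds each column of the product as a matrix-vector product and then collects them:

```python
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def matmul_by_columns(A, B):
    # the k-th column of C is A b_k; collect the columns, then list C by rows
    cols = [matvec(A, [row[k] for row in B]) for k in range(len(B[0]))]
    return [[cols[k][i] for k in range(len(cols))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]
print(matmul_by_columns(A, B))  # [[8, 4, 12, 0], [8, -2, 6, 0]]
```

These are exactly the matrices used in the numerical example below, and the result agrees with the across and down calculation there.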
&lt;br /&gt;
This is not the usual presentation of the definition of the product of two matrices, which relies on the &amp;#039;&amp;#039;across and down rule&amp;#039;&amp;#039; mentioned earlier, and focusses on the elements of each matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt; Set:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{b}_{1} &amp;amp; \mathbf{b}_{2} &amp;amp; \ldots &amp;amp; \mathbf{b}_{r}\end{array}\right]\text{\,\,\,\,\,\,\,(by columns)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert b_{ik}\right\Vert ,\ \ \ \ \ i=1,...,n,k=1,...,r\text{ \,\,\,\,\,\,(typical element)}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\text{\,\,\,\,\,\,\,(the array)}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does the typical element of the &amp;lt;math&amp;gt;m\times r&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; look like? Start with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;A\mathbf{b}_{k}.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element in &amp;lt;math&amp;gt;A\mathbf{b}_{k}&amp;lt;/math&amp;gt; is, from equation ([eq:ab]), the inner product of the elements of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left[\begin{array}{rrrr}&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\end{array}\right],&amp;lt;/math&amp;gt; with the elements of &amp;lt;math&amp;gt;\mathbf{b}_{k},&amp;lt;/math&amp;gt; so that the inner product is: &amp;lt;math&amp;gt;a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, the &amp;lt;math&amp;gt;ik&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\ldots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.&amp;lt;/math&amp;gt; We can see this arising from an &amp;#039;&amp;#039;across and down&amp;#039;&amp;#039; calculation by writing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\label{eq:c_ab}\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; \ldots &amp;amp; a_{2n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; \ldots &amp;amp; b_{1k} &amp;amp; \ldots &amp;amp; b_{1r}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; \ldots &amp;amp; b_{2k} &amp;amp; \ldots &amp;amp; b_{2r}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
b_{n1} &amp;amp; b_{n2} &amp;amp; \ldots &amp;amp; b_{nk} &amp;amp; \ldots &amp;amp; b_{nr}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left\Vert \sum_{j=1}^{n}a_{ij}b_{jk}\right\Vert .\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These ideas are simple, but a little tedious. Numerical examples are equally tedious! As an example, using: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; we can find the matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that&lt;br /&gt;
&lt;br /&gt;
# the first column of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; adds together the columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the second column is the difference of the first and second columns of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the third column is &amp;lt;math&amp;gt;2\times&amp;lt;/math&amp;gt; the first column of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt;&lt;br /&gt;
# the fourth column is zero.&lt;br /&gt;
&lt;br /&gt;
It is easy to check that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C &amp;amp; = &amp;amp; AB\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
1 &amp;amp; -1 &amp;amp; 0 &amp;amp; 0&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{cccc}&lt;br /&gt;
8 &amp;amp; 4 &amp;amp; 12 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; -2 &amp;amp; 6 &amp;amp; 0&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Arithmetic calculations of matrix products almost always use the elementwise across and down formula. However, there are many situations in econometrics where algebraic rather than arithmetic arguments are required. In these cases, the viewpoint of matrix multiplication as linear combinations of columns is much more powerful.&lt;br /&gt;
&lt;br /&gt;
Clearly one can give many more examples of different dimensions and complexities - but the same basic rules apply. To multiply two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; together, the number of columns in &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; must match the number of rows in &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; - this is &amp;#039;&amp;#039;conformability&amp;#039;&amp;#039; in action again. The resulting product has as many rows as &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and as many columns as &amp;lt;math&amp;gt;B.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this conformability rule does not hold, then the product of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is not defined.&lt;br /&gt;
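The conformability rule is easy to enforce in code. As a hypothetical sketch (again in plain Python, not Matlab), a multiplication routine can simply refuse non-conformable arguments:

```python
def matmul(A, B):
    n = len(A[0])                # number of columns of A
    if len(B) != n:              # must equal the number of rows of B
        raise ValueError("not conformable: %d columns vs %d rows" % (n, len(B)))
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]                 # 2 x 2
B = [[1, 1, 2, 0], [1, -1, 0, 0]]    # 2 x 4
C = matmul(A, B)                     # defined: 2 x 4

try:
    matmul(B, A)                     # 4 columns against 2 rows: undefined
except ValueError as e:
    print(e)
```

This mirrors what Matlab does: a conformable product succeeds, and a non-conformable one raises a dimension error.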
&lt;br /&gt;
== Matlab ==&lt;br /&gt;
&lt;br /&gt;
One should also say that as the dimensions of the matrices increase, so does the tedium of the calculations. The solution for numerical calculation is to appeal to the computer. Programs like Matlab and Excel (and a number of others, some of them free) resolve this difficulty easily.&lt;br /&gt;
&lt;br /&gt;
In Matlab, symbols for row or column vectors do not need any particular differentiation: they are distinguished by how they are defined. For example, the following Matlab commands define &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;as a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; as a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector, then display the contents of these variables, and do a calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec = [1 2 3 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec = [1;2;3;4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec&lt;br /&gt;
&lt;br /&gt;
rowvec =&lt;br /&gt;
&lt;br /&gt;
1 2 3 4&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; colvec&lt;br /&gt;
&lt;br /&gt;
colvec =&lt;br /&gt;
&lt;br /&gt;
1&lt;br /&gt;
2&lt;br /&gt;
3&lt;br /&gt;
4 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; rowvec*colvec&lt;br /&gt;
&lt;br /&gt;
ans =&lt;br /&gt;
&lt;br /&gt;
30 &lt;br /&gt;
&lt;br /&gt;
So, the semi-colon indicates the end of a row in a matrix or vector; it can be replaced by a carriage return. Notice the difference in how a row vector and a column vector are defined. One can see that the product &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; is well defined, precisely because &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
&lt;br /&gt;
Matlab also allows elementwise multiplication of two vectors using the &amp;lt;math&amp;gt;\centerdot\ast&amp;lt;/math&amp;gt; operator: if: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
x_{2}&lt;br /&gt;
\end{array}\right],\ \ \ \mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
y_{1}\\&lt;br /&gt;
y_{2}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}=\left[\begin{array}{r}&lt;br /&gt;
x_{1}y_{1}\\&lt;br /&gt;
x_{2}y_{2}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and one can see that the inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; can be obtained as the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x}\centerdot\ast\mathbf{y}.&amp;lt;/math&amp;gt; In Matlab, this would be obtained as: &amp;lt;math&amp;gt;\text{sum}\left(\mathbf{x}\centerdot\ast\mathbf{y}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
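The same elementwise-then-sum construction of the inner product is available in most languages. A minimal Python sketch (illustrative only; Matlab's operators are &amp;lt;code&amp;gt;.*&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;):

```python
x = [1, 2, 3, 4]
y = [1, 2, 3, 4]

elementwise = [xi * yi for xi, yi in zip(x, y)]  # the analogue of x .* y
inner = sum(elementwise)                         # the analogue of sum(x .* y)
print(elementwise, inner)  # [1, 4, 9, 16] 30
```

The value 30 agrees with the &amp;lt;code&amp;gt;rowvec*colvec&amp;lt;/code&amp;gt; calculation above.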
&lt;br /&gt;
In the example above, this calculation fails since &amp;lt;code&amp;gt;rowvec &amp;lt;/code&amp;gt;is a &amp;lt;math&amp;gt;1\times4&amp;lt;/math&amp;gt; vector, and &amp;lt;code&amp;gt;colvec&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;4\times1&amp;lt;/math&amp;gt; vector:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; sum(rowvec .* colvec)&lt;br /&gt;
&lt;br /&gt;
??? Error using ==&amp;amp;gt; times&lt;br /&gt;
&lt;br /&gt;
Matrix dimensions must agree. &lt;br /&gt;
&lt;br /&gt;
For this to work, &amp;lt;code&amp;gt;rowvec&amp;lt;/code&amp;gt; would have to be transposed as &amp;lt;code&amp;gt;rowvec&amp;#039;&amp;lt;/code&amp;gt;; transposition in Matlab is thus very natural.&lt;br /&gt;
&lt;br /&gt;
Allowing for such difficulties, matrix multiplication in Matlab is very simple:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = [1 1 2 0;1 -1 0 0];&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = A * B; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
 8 4 12 0&lt;br /&gt;
&lt;br /&gt;
 8 -2 6 0 &lt;br /&gt;
&lt;br /&gt;
Notice how the matrices are defined here through their rows. The &amp;lt;code&amp;gt;disp() &amp;lt;/code&amp;gt;command displays the contents of the object referred to.&lt;br /&gt;
&lt;br /&gt;
It is less natural in Matlab to define matrices by columns - a typical example of how mathematics and computing have conflicts of notation. However, once columns &amp;lt;math&amp;gt;\mathbf{a}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{b}&amp;lt;/math&amp;gt; have been defined, the concatenation operation &amp;lt;math&amp;gt;\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{a} &amp;amp; \mathbf{b}\end{array}\right]&amp;lt;/math&amp;gt; collects the columns into a matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; a = [6;2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; b = [3;5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [a b]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(C)&lt;br /&gt;
&lt;br /&gt;
6 3 &lt;br /&gt;
&lt;br /&gt;
2 5 &lt;br /&gt;
&lt;br /&gt;
Notice that the &amp;lt;code&amp;gt;disp(C)&amp;lt;/code&amp;gt; command does not label the result that is printed out. Simply typing &amp;lt;code&amp;gt;C&amp;lt;/code&amp;gt; would preface the output by &amp;lt;code&amp;gt;C =&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Pre and Post Multiplication ==&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; as above, say that &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;pre-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C,&amp;lt;/math&amp;gt; and that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;post-multiplied &amp;#039;&amp;#039;by &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; to get &amp;lt;math&amp;gt;C.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This distinction between &amp;#039;&amp;#039;pre &amp;#039;&amp;#039;and &amp;#039;&amp;#039;post &amp;#039;&amp;#039;multiplication is important, in the following sense. Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are matrices such that the products &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined. If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; rows for &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; to be defined. For &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; to be defined, &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; columns to match the &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows in &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both defined if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when both products are defined, there is no reason for the two products to coincide. The first thing to notice is that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;m\times m,&amp;lt;/math&amp;gt; matrix, whilst &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; is a square, &amp;lt;math&amp;gt;n\times n,&amp;lt;/math&amp;gt; matrix. Different sized matrices cannot be equal. To illustrate, use the matrices: &amp;lt;math&amp;gt;B_{2}=\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right],\ \ \ C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]:&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
B_{2}C &amp;amp; = &amp;amp; \left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{rrr}&lt;br /&gt;
27 &amp;amp; -3 &amp;amp; -15\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-15 &amp;amp; -1 &amp;amp; 8&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
CB_{2} &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; -3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
49 &amp;amp; -11\\&lt;br /&gt;
31 &amp;amp; 15&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when &amp;lt;math&amp;gt;m=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; are both &amp;lt;math&amp;gt;m\times m&amp;lt;/math&amp;gt; matrices, the products can differ: for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; -1&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
8 &amp;amp; 4\\&lt;br /&gt;
8 &amp;amp; -2&lt;br /&gt;
\end{array}\right],\ \ \ \ \ BA=\left[\begin{array}{cc}&lt;br /&gt;
9 &amp;amp; 7\\&lt;br /&gt;
3 &amp;amp; -3&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In cases where &amp;lt;math&amp;gt;AB=BA,&amp;lt;/math&amp;gt; the matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; are said to &amp;#039;&amp;#039;commute&amp;#039;&amp;#039;.&lt;br /&gt;
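The &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; example above can be replicated in a few lines of Python (a hypothetical check, not part of the Matlab material):

```python
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[6, 2], [3, 5]]
B = [[1, 1], [1, -1]]
print(matmul(A, B))  # [[8, 4], [8, -2]]
print(matmul(B, A))  # [[9, 7], [3, -3]]
```

The two products differ, so this pair of matrices does not commute.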
&lt;br /&gt;
== Transposition ==&lt;br /&gt;
&lt;br /&gt;
A column vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; can be converted to a row vector &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by transposition: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ \mathbf{x}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
x_{1} &amp;amp; \ldots &amp;amp; x_{n}\end{array}\right].&amp;lt;/math&amp;gt; Transposing &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\left(\mathbf{x}^{T}\right)^{T}&amp;lt;/math&amp;gt; reproduces the original vector &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; How do these ideas carry over to matrices?&lt;br /&gt;
&lt;br /&gt;
If the &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can be written as &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{a}_{1} &amp;amp; \ldots &amp;amp; \mathbf{a}_{n}\end{array}\right],&amp;lt;/math&amp;gt; the transpose of &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; is defined as the matrix whose &amp;#039;&amp;#039;rows&amp;#039;&amp;#039; are &amp;lt;math&amp;gt;\mathbf{a}_{i}^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{c}&lt;br /&gt;
\mathbf{a}_{1}^{T}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
\mathbf{a}_{n}^{T}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; In terms of elements, if: &amp;lt;math&amp;gt;\mathbf{a}_{i}=\left[\begin{array}{c}&lt;br /&gt;
a_{1i}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
a_{ni}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; \ldots &amp;amp; a_{1n}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{i1} &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{in}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{m1} &amp;amp; a_{m2} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right],\ \ \ \ \ A^{T}=\left[\begin{array}{rrrrr}&lt;br /&gt;
a_{11} &amp;amp; \ldots &amp;amp; a_{i1} &amp;amp; \ldots &amp;amp; a_{m1}\\&lt;br /&gt;
a_{12} &amp;amp; \ldots &amp;amp; a_{i2} &amp;amp; \ldots &amp;amp; a_{m2}\\&lt;br /&gt;
\vdots &amp;amp;  &amp;amp; \vdots &amp;amp;  &amp;amp; \vdots\\&lt;br /&gt;
a_{1n} &amp;amp; \ldots &amp;amp; a_{in} &amp;amp; \ldots &amp;amp; a_{mn}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; One can see that the first column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; has now become the first row of &amp;lt;math&amp;gt;A^{T}.&amp;lt;/math&amp;gt; Notice too that &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n\times m&amp;lt;/math&amp;gt; matrix if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix.&lt;br /&gt;
&lt;br /&gt;
Transposing &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; takes the first column of &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; and writes it as a row, which coincides with the first row of &amp;lt;math&amp;gt;A.&amp;lt;/math&amp;gt; The same argument applies to the other columns of &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A.&amp;lt;/math&amp;gt;&lt;br /&gt;
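The property &amp;lt;math&amp;gt;\left(A^{T}\right)^{T}=A&amp;lt;/math&amp;gt; is a one-line check in code. A minimal Python sketch (illustrative only):

```python
def transpose(A):
    # entry (j, i) of the transpose is entry (i, j) of A
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3], [4, 5, 6]]   # 2 x 3
At = transpose(A)            # 3 x 2
print(transpose(At) == A)    # True
```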
&lt;br /&gt;
=== The product rule for transposition ===&lt;br /&gt;
&lt;br /&gt;
This states that if &amp;lt;math&amp;gt;C=AB,&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;C^{T}=B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How to see this? Consider the following example: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right],\ \ \ B=\left[\begin{array}{rrrr}&lt;br /&gt;
b_{11} &amp;amp; b_{12} &amp;amp; b_{13} &amp;amp; b_{14}\\&lt;br /&gt;
b_{21} &amp;amp; b_{22} &amp;amp; b_{23} &amp;amp; b_{24}\\&lt;br /&gt;
b_{31} &amp;amp; b_{32} &amp;amp; b_{33} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; where:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;c_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=\sum_{k=1}^{3}a_{2k}b_{k3}.\label{eq:c23}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One can see that: &amp;lt;math&amp;gt;B^{T}A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
b_{11} &amp;amp; b_{21} &amp;amp; b_{31}\\&lt;br /&gt;
b_{12} &amp;amp; b_{22} &amp;amp; b_{32}\\&lt;br /&gt;
b_{13} &amp;amp; b_{23} &amp;amp; b_{33}\\&lt;br /&gt;
b_{14} &amp;amp; b_{24} &amp;amp; b_{34}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
a_{11} &amp;amp; a_{21}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23}&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and that the &amp;lt;math&amp;gt;\left(3,2\right)&amp;lt;/math&amp;gt; element of this product is actually &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;b_{13}a_{21}+b_{23}a_{22}+b_{33}a_{23}=a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}=c_{23}.&amp;lt;/math&amp;gt; In summation notation, we see that from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;c_{23}=\sum_{k=1}^{3}b_{k3}a_{2k},&amp;lt;/math&amp;gt; where the position of the index of summation is due to the transposition. So, in summation notation, the calculation of &amp;lt;math&amp;gt;c_{23}&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; equals that from equation ([eq:c23]).&lt;br /&gt;
&lt;br /&gt;
More generally, the &amp;lt;math&amp;gt;\left(i,j\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\sum_{k=1}^{3}a_{ik}b_{kj}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;\left(j,i\right)&amp;lt;/math&amp;gt; element of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt; But this means that &amp;lt;math&amp;gt;B^{T}A^{T}&amp;lt;/math&amp;gt; must be the transpose of &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; since the elements in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th row of &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; are being written in the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th column of &amp;lt;math&amp;gt;B^{T}A^{T}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This &amp;#039;&amp;#039;Product Rule for Transposition&amp;#039;&amp;#039; can be applied again to find the transpose &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;C^{T}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\left(C^{T}\right)^{T}=\left(B^{T}A^{T}\right)^{T}=\left(A^{T}\right)^{T}\left(B^{T}\right)^{T}=AB=C.&amp;lt;/math&amp;gt;&lt;br /&gt;
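The product rule &amp;lt;math&amp;gt;\left(AB\right)^{T}=B^{T}A^{T}&amp;lt;/math&amp;gt; can also be verified numerically. A short Python sketch (hypothetical, using the matrices from the multiplication example earlier):

```python
def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[6, 2], [3, 5]]
B = [[1, 1, 2, 0], [1, -1, 0, 0]]

lhs = transpose(matmul(A, B))             # (AB)^T
rhs = matmul(transpose(B), transpose(A))  # B^T A^T
print(lhs == rhs)  # True
```

Note the order reversal: &amp;lt;math&amp;gt;A^{T}B^{T}&amp;lt;/math&amp;gt; would not even be conformable here.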
&lt;br /&gt;
= Special Types of Matrix =&lt;br /&gt;
&lt;br /&gt;
== The zero matrix ==&lt;br /&gt;
&lt;br /&gt;
The most obvious special type of matrix is one whose elements are all zeros. In typical element notation, the zero matrix is: &amp;lt;math&amp;gt;0=\left\Vert 0\right\Vert .&amp;lt;/math&amp;gt; Since there is no indexing on the elements, it is not obvious what the dimension of this matrix is. Sometimes one writes &amp;lt;math&amp;gt;0_{mn}&amp;lt;/math&amp;gt; to indicate a zero matrix of dimension &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The same ideas apply to vectors whose elements are all zero.&lt;br /&gt;
&lt;br /&gt;
The effect of the zero matrix in any product that is defined is simple: &amp;lt;math&amp;gt;0A=0,\ \ \ \ \ B0=0.&amp;lt;/math&amp;gt; This is easy to check using the across and down rule.&lt;br /&gt;
&lt;br /&gt;
== The identity or unit matrix ==&lt;br /&gt;
&lt;br /&gt;
Vectors of the form:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }2\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{c}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }3\ \text{dimensions}\\&lt;br /&gt;
\left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
1\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0&lt;br /&gt;
\end{array}\right],\ldots,\left[\begin{array}{r}&lt;br /&gt;
0\\&lt;br /&gt;
0\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
0\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\ \ \ \ \ \text{in }n\ \text{dimensions}\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
are called coordinate vectors. They are often given a characteristic notation, &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n},&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; dimensions. When &amp;lt;math&amp;gt;\mathbf{e}_{1},\ldots,\mathbf{e}_{n}&amp;lt;/math&amp;gt; are arranged in the natural order as the columns of a matrix, a characteristic pattern of elements emerges, with a special notation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{2}\\&lt;br /&gt;
\left[\begin{array}{rrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \mathbf{e}_{3}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{3}\\&lt;br /&gt;
\left[\begin{array}{rrrr}&lt;br /&gt;
\mathbf{e}_{1} &amp;amp; \mathbf{e}_{2} &amp;amp; \ldots &amp;amp; \mathbf{e}_{n}\end{array}\right] &amp;amp; = &amp;amp; \left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \ldots &amp;amp; 0\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right]=I_{n}.\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;diagonal&amp;#039;&amp;#039; of this matrix is where the 1 elements are located, and every other element is zero.&lt;br /&gt;
&lt;br /&gt;
Consider the effect of &amp;lt;math&amp;gt;I_{2}&amp;lt;/math&amp;gt; on the matrix: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; by both pre and post multiplication:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
I_{2}A &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\\&lt;br /&gt;
AI_{2} &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=A,\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
as is easily checked by the across and down rule.&lt;br /&gt;
&lt;br /&gt;
Because any matrix is left unchanged by pre or post multiplication by an appropriately dimensioned &amp;lt;math&amp;gt;I_{n},&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is called an &amp;#039;&amp;#039;identity matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Sometimes it is called a &amp;#039;&amp;#039;unit matrix of dimension &amp;#039;&amp;#039;&amp;lt;math&amp;gt;n.&amp;lt;/math&amp;gt; Notice that &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is necessarily a square matrix.&lt;br /&gt;
&lt;br /&gt;
== Diagonal matrices ==&lt;br /&gt;
&lt;br /&gt;
The identity matrix is an example of a diagonal matrix, a matrix whose elements are all zero except for those on the diagonal. Usually diagonal matrices are taken to be square, for example: &amp;lt;math&amp;gt;D=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; They also produce characteristic effects when pre or post multiplying another matrix.&lt;br /&gt;
&lt;br /&gt;
Consider the diagonal matrix: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and the products &amp;lt;math&amp;gt;AB,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;BA&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; as defined in the previous section:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; -4\\&lt;br /&gt;
6 &amp;amp; -10&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
BA &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; -2&lt;br /&gt;
\end{array}\right]\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 2\\&lt;br /&gt;
3 &amp;amp; 5&lt;br /&gt;
\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
12 &amp;amp; 4\\&lt;br /&gt;
-6 &amp;amp; -10&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Comparing the results, we can deduce that post multiplication by a diagonal matrix multiplies each column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; by the corresponding diagonal element, whereas pre multiplication multiplies each row by the corresponding diagonal element.&lt;br /&gt;
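&lt;br /&gt;
This behaviour can be checked in Matlab (a sketch, assuming a Matlab session; the built-in diag builds a diagonal matrix from a vector of diagonal elements):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [6 2; 3 5]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B = diag([2 -2]); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A*B    % each column of A scaled by the corresponding diagonal element &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; B*A    % each row of A scaled by the corresponding diagonal element &lt;br /&gt;
&lt;br /&gt;
which reproduces the two products displayed above.&lt;br /&gt;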
&lt;br /&gt;
== Symmetric matrices ==&lt;br /&gt;
&lt;br /&gt;
Symmetric matrices are matrices having the property that &amp;lt;math&amp;gt;A=A^{T}.&amp;lt;/math&amp;gt; Notice that such matrices must be square, since if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n,&amp;lt;/math&amp;gt; &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and to have equality of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T},&amp;lt;/math&amp;gt; they must have the same dimension, so that &amp;lt;math&amp;gt;m=n&amp;lt;/math&amp;gt; is required.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; symmetric matrix, with typical element &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{12} &amp;amp; a_{13}\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; a_{23}\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; so that: &amp;lt;math&amp;gt;A^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; a_{21} &amp;amp; a_{31}\\&lt;br /&gt;
a_{12} &amp;amp; a_{22} &amp;amp; a_{32}\\&lt;br /&gt;
a_{13} &amp;amp; a_{23} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Equality of matrices is defined as equality of all elements. This holds automatically on the diagonal, since &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; have the same diagonal elements. For the off-diagonal elements, we end up with the requirements: &amp;lt;math&amp;gt;a_{12}=a_{21},\ \ \ a_{13}=a_{31},\ \ \ a_{23}=a_{32}&amp;lt;/math&amp;gt; or more generally: &amp;lt;math&amp;gt;a_{ij}=a_{ji}\ \ \ \ \ \text{for}\ i\neq j.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The effect of this conclusion is that in a symmetric matrix, the &amp;#039;triangle&amp;#039; of above-diagonal elements coincides with the triangle of below-diagonal elements. It is as if the upper triangle is folded over the diagonal to become the lower triangle.&lt;br /&gt;
&lt;br /&gt;
A simple example is: &amp;lt;math&amp;gt;A=\left[\begin{array}{cc}&lt;br /&gt;
1 &amp;amp; 2\\&lt;br /&gt;
2 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; A more complicated example uses the &amp;lt;math&amp;gt;2\times3&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C=\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; and calculates the &amp;lt;math&amp;gt;3\times3&amp;lt;/math&amp;gt; matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
C^{T}C &amp;amp; = &amp;amp; \left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
2 &amp;amp; 5\\&lt;br /&gt;
-3 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
6 &amp;amp; 2 &amp;amp; -3\\&lt;br /&gt;
3 &amp;amp; 5 &amp;amp; -1&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
45 &amp;amp; 27 &amp;amp; -21\\&lt;br /&gt;
27 &amp;amp; 29 &amp;amp; -11\\&lt;br /&gt;
-21 &amp;amp; -11 &amp;amp; 10&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which is clearly symmetric.&lt;br /&gt;
&lt;br /&gt;
This illustrates the general proposition that if &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix, the product &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is a symmetric &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix. Proof? Compute the transpose of &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; using the product rule for transposition: &amp;lt;math&amp;gt;\left(A^{T}A\right)^{T}=A^{T}\left(A^{T}\right)^{T}=A^{T}A.&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;A^{T}A&amp;lt;/math&amp;gt; is equal to its transpose, it must be a symmetric matrix. Such symmetric matrices appear frequently in econometrics.&lt;br /&gt;
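&lt;br /&gt;
The proposition is easily illustrated numerically (a sketch, assuming a Matlab session; the apostrophe is Matlab&amp;#039;s transpose operator):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; C = [6 2 -3; 3 5 -1]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; S = C&amp;#039;*C; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; isequal(S, S&amp;#039;)    % returns 1: S equals its transpose &lt;br /&gt;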
&lt;br /&gt;
It should be clear that diagonal matrices are symmetric, since all their off-diagonal elements are equal (zero), and thence the identity matrix &amp;lt;math&amp;gt;I_{n}&amp;lt;/math&amp;gt; is also symmetric.&lt;br /&gt;
&lt;br /&gt;
== The outer product ==&lt;br /&gt;
&lt;br /&gt;
The inner product of two &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\mathbf{x}^{T}\mathbf{y}&amp;lt;/math&amp;gt;, is automatically a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; quantity, a scalar, although it can be interpreted as a &amp;lt;math&amp;gt;1\times1&amp;lt;/math&amp;gt; matrix, a matrix with a single element.&lt;br /&gt;
&lt;br /&gt;
Suppose instead one considers the product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{x}^{T}.&amp;lt;/math&amp;gt; Is this defined? If &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times r,&amp;lt;/math&amp;gt; then the product &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times r.&amp;lt;/math&amp;gt; Applying this logic to &amp;lt;math&amp;gt;\mathbf{xx}^{T},&amp;lt;/math&amp;gt; which is &amp;lt;math&amp;gt;\left(n\times1\right)\left(1\times n\right),&amp;lt;/math&amp;gt; the product &amp;#039;&amp;#039;is&amp;#039;&amp;#039; defined, and is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; &amp;#039;&amp;#039;matrix&amp;#039;&amp;#039; - the &amp;#039;&amp;#039;outer product&amp;#039;&amp;#039; of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; the word &amp;#039;outer&amp;#039; distinguishing it from the inner product.&lt;br /&gt;
&lt;br /&gt;
How does the across and down rule work here? Suppose that: &amp;lt;math&amp;gt;\mathbf{x}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Then: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right].&amp;lt;/math&amp;gt; Here, there is &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in row one of the &amp;#039;matrix&amp;#039; &amp;lt;math&amp;gt;\mathbf{x,}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt; element in column one of the matrix &amp;lt;math&amp;gt;\mathbf{x}^{T},&amp;lt;/math&amp;gt; so the across and down rule still works - it is just that there is only one product per row and column combination. So: &amp;lt;math&amp;gt;\mathbf{xx}^{T}=\left[\begin{array}{cc}&lt;br /&gt;
36 &amp;amp; 18\\&lt;br /&gt;
18 &amp;amp; 9&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and it is obvious from this that &amp;lt;math&amp;gt;\mathbf{xx}^{T}&amp;lt;/math&amp;gt; is a symmetric matrix.&lt;br /&gt;
&lt;br /&gt;
One can see that this outer product need not be restricted to vectors of the same dimension. If &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times1,&amp;lt;/math&amp;gt; then: &amp;lt;math&amp;gt;\mathbf{xy}^{T}=\left[\begin{array}{c}&lt;br /&gt;
x_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
x_{n}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rrr}&lt;br /&gt;
y_{1} &amp;amp; \ldots &amp;amp; y_{m}\end{array}\right]=\left[\begin{array}{rrrr}&lt;br /&gt;
x_{1}y_{1} &amp;amp; x_{1}y_{2} &amp;amp; \ldots &amp;amp; x_{1}y_{m}\\&lt;br /&gt;
x_{2}y_{1} &amp;amp; x_{2}y_{2} &amp;amp; \ldots &amp;amp; x_{2}y_{m}\\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots\\&lt;br /&gt;
x_{n}y_{1} &amp;amp; x_{n}y_{2} &amp;amp; \ldots &amp;amp; x_{n}y_{m}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; So, &amp;lt;math&amp;gt;\mathbf{xy}^{T}&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;n\times m,&amp;lt;/math&amp;gt; and consists of rows which are &amp;lt;math&amp;gt;\mathbf{y}^{T}&amp;lt;/math&amp;gt; multiplied by an element of the &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; vector.&lt;br /&gt;
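&lt;br /&gt;
A small numerical instance (a sketch, assuming a Matlab session, with hypothetical vectors x and y of different lengths):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [6; 3]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; y = [1; 2; 4]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x*y&amp;#039;    % the 2 x 3 outer product: rows are y&amp;#039; scaled by 6 and by 3 &lt;br /&gt;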
&lt;br /&gt;
Another interesting and useful example involves a vector with every element equal to &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Sometimes this is written as &amp;lt;math&amp;gt;\mathbf{1}_{n}&amp;lt;/math&amp;gt; to indicate an &amp;lt;math&amp;gt;n\times1&amp;lt;/math&amp;gt; vector, and is called the &amp;#039;&amp;#039;sum vector&amp;#039;&amp;#039;. Why? Consider the impact of &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; on the &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; used above: &amp;lt;math&amp;gt;\mathbf{1}_{2}^{T}\mathbf{x}=\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]\left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]=9,&amp;lt;/math&amp;gt; i.e. an inner product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with the sum vector is the sum of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt; Dividing through by the number of elements in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; produces the average of the elements of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; - i.e. the &amp;#039;sample mean&amp;#039; of the elements of &amp;lt;math&amp;gt;\mathbf{x.}&amp;lt;/math&amp;gt;&lt;br /&gt;
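&lt;br /&gt;
In Matlab (a sketch, assuming a Matlab session; ones(m, n) builds an m x n matrix of ones):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [6; 3]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; s = ones(1, 2)*x    % the inner product with the sum vector: 9 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; s/2                 % the sample mean: 4.5, as given by mean(x) &lt;br /&gt;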
&lt;br /&gt;
The outer product of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\mathbf{1}_{2}&amp;lt;/math&amp;gt; is also interesting:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
\mathbf{1}_{2}\mathbf{x}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
1\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
6 &amp;amp; 3\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 3\\&lt;br /&gt;
6 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\\&lt;br /&gt;
\mathbf{x1}_{2}^{T} &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
6\\&lt;br /&gt;
3&lt;br /&gt;
\end{array}\right]\left[\begin{array}{rr}&lt;br /&gt;
1 &amp;amp; 1\end{array}\right]=\left[\begin{array}{cc}&lt;br /&gt;
6 &amp;amp; 6\\&lt;br /&gt;
3 &amp;amp; 3&lt;br /&gt;
\end{array}\right],\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
showing that pre multiplication of &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}^{T}&amp;lt;/math&amp;gt; as the rows of the product, whilst post multiplication of &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; by &amp;lt;math&amp;gt;\mathbf{1}^{T}&amp;lt;/math&amp;gt; repeats &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as the columns of the product.&lt;br /&gt;
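&lt;br /&gt;
Both outer products are easily verified (a sketch, assuming a Matlab session):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [6; 3]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; ones(2, 1)*x&amp;#039;    % repeats x&amp;#039; as the rows of the product &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x*ones(1, 2)    % repeats x as the columns of the product &lt;br /&gt;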
&lt;br /&gt;
Finally: &amp;lt;math&amp;gt;\mathbf{1}_{n}\mathbf{1}_{n}^{T}=\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1\\&lt;br /&gt;
1 &amp;amp; \ldots &amp;amp; 1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix with every element equal to &amp;lt;math&amp;gt;1.&amp;lt;/math&amp;gt; This type of matrix also appears in econometrics!&lt;br /&gt;
&lt;br /&gt;
== Triangular matrices ==&lt;br /&gt;
&lt;br /&gt;
A square &amp;#039;&amp;#039;lower triangular &amp;#039;&amp;#039;matrix has all elements above the main diagonal equal to zero, whilst a square &amp;#039;&amp;#039;upper triangular &amp;#039;&amp;#039;matrix has all elements below the main diagonal equal to zero. A simple example of a lower triangular matrix is: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
a_{11} &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
a_{21} &amp;amp; a_{22} &amp;amp; 0\\&lt;br /&gt;
a_{31} &amp;amp; a_{32} &amp;amp; a_{33}&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Clearly, for this matrix, &amp;lt;math&amp;gt;A^{T}&amp;lt;/math&amp;gt; is an upper triangular matrix.&lt;br /&gt;
&lt;br /&gt;
One can adapt the definition to rectangular matrices: for example, if two arbitrary rows are added to &amp;lt;math&amp;gt;A,&amp;lt;/math&amp;gt; so that it becomes &amp;lt;math&amp;gt;5\times3,&amp;lt;/math&amp;gt; it would still be considered lower triangular. Equally, if, for example, the third column of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above is removed, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is still considered lower triangular.&lt;br /&gt;
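&lt;br /&gt;
Matlab provides the built-in functions tril and triu, which extract the lower and upper triangular parts of a matrix by zeroing the remaining elements (a sketch, assuming a Matlab session with a hypothetical matrix M):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; M = [1 2 3; 4 5 6; 7 8 9]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; tril(M)    % lower triangular part of M &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; triu(M)    % upper triangular part of M &lt;br /&gt;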
&lt;br /&gt;
Often, we use &amp;#039;&amp;#039;unit &amp;#039;&amp;#039;triangular matrices, where the diagonal elements are all equal to &amp;lt;math&amp;gt;1:&amp;lt;/math&amp;gt; e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\left[\begin{array}{rrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Partitioned matrices ==&lt;br /&gt;
&lt;br /&gt;
Sometimes, especially with big matrices, it is useful to organise the elements of the matrix into components which are themselves matrices, for example: &amp;lt;math&amp;gt;B=\left[\begin{array}{rrrr}&lt;br /&gt;
1 &amp;amp; 2 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
8 &amp;amp; 3 &amp;amp; 0 &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 7 &amp;amp; 4\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 6 &amp;amp; 5&lt;br /&gt;
\end{array}\right].&amp;lt;/math&amp;gt; Here it would be reasonable to write: &amp;lt;math&amp;gt;B=\left[\begin{array}{cc}&lt;br /&gt;
B_{11} &amp;amp; 0\\&lt;br /&gt;
0 &amp;amp; B_{22}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;B_{ii},i=1,2,&amp;lt;/math&amp;gt; represent &amp;lt;math&amp;gt;2\times2&amp;lt;/math&amp;gt; matrices. &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; is an example of a &amp;#039;&amp;#039;partitioned matrix&amp;#039;&amp;#039;: that is, an &amp;lt;math&amp;gt;m\times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; say: &amp;lt;math&amp;gt;A=\left\Vert a_{ij}\right\Vert ,&amp;lt;/math&amp;gt; where the elements of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are organised into &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039;. An example might be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the &amp;#039;&amp;#039;sub-matrices&amp;#039;&amp;#039; in the first row block have &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; rows, and those in the second row block therefore have &amp;lt;math&amp;gt;m-r&amp;lt;/math&amp;gt; rows. The column blocks might be defined by (for example) 3 columns in the first column block, 4 in the second and &amp;lt;math&amp;gt;n-7&amp;lt;/math&amp;gt; in the third column block.&lt;br /&gt;
&lt;br /&gt;
Another simple example might be: &amp;lt;math&amp;gt;A=\left[\begin{array}{rrr}&lt;br /&gt;
A_{1} &amp;amp; A_{2} &amp;amp; A_{3}\end{array}\right],\ \ \ \ \ \mathbf{x=}\left[\begin{array}{c}&lt;br /&gt;
\mathbf{x}_{1}\\&lt;br /&gt;
\mathbf{x}_{2}\\&lt;br /&gt;
\mathbf{x}_{3}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and therefore &amp;lt;math&amp;gt;A_{1},A_{2},A_{3}&amp;lt;/math&amp;gt; have &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows, &amp;lt;math&amp;gt;A_{1}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{2}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; columns, &amp;lt;math&amp;gt;A_{3}&amp;lt;/math&amp;gt; has &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; columns. The &amp;#039;&amp;#039;subvectors&amp;#039;&amp;#039; in &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; must have &amp;lt;math&amp;gt;n_{1},n_{2}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; rows respectively, for the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; to exist.&lt;br /&gt;
&lt;br /&gt;
Suppose that &amp;lt;math&amp;gt;n_{1}+n_{2}+n_{3}=n,&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;m\times n.&amp;lt;/math&amp;gt; The &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th element of &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; is: &amp;lt;math&amp;gt;\sum_{j=1}^{n}a_{ij}x_{j},&amp;lt;/math&amp;gt; but the summation can be broken up into the first &amp;lt;math&amp;gt;n_{1}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=1}^{n_{1}}a_{ij}x_{j},&amp;lt;/math&amp;gt; the next &amp;lt;math&amp;gt;n_{2}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+1}^{n_{1}+n_{2}}a_{ij}x_{j},&amp;lt;/math&amp;gt; and the last &amp;lt;math&amp;gt;n_{3}&amp;lt;/math&amp;gt; terms: &amp;lt;math&amp;gt;\sum_{j=n_{1}+n_{2}+1}^{n}a_{ij}x_{j}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The point of using partitioned matrices is that the product &amp;lt;math&amp;gt;A\mathbf{x}&amp;lt;/math&amp;gt; can be represented as: &amp;lt;math&amp;gt;A\mathbf{x}=A_{1}\mathbf{x}_{1}+A_{2}\mathbf{x}_{2}+A_{3}\mathbf{x}_{3}&amp;lt;/math&amp;gt; by applying the across and down rule to the submatrices and the subvectors, a much simpler representation than the use of summations.&lt;br /&gt;
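&lt;br /&gt;
A minimal numerical check uses the trivial partition in which each block of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a single column (a sketch, assuming a Matlab session with hypothetical values):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A = [1 2 3; 4 5 6]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = [1; 1; 2]; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A*x &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; A(:,1)*x(1) + A(:,2)*x(2) + A(:,3)*x(3)    % the same result, block by block &lt;br /&gt;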
&lt;br /&gt;
Each of the components is a conformable matrix-vector product: this is essential in any use of partitioned matrices to represent some matrix product. For example, using the partitioned matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; above and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;B=\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; it is easy to write:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
AB &amp;amp; = &amp;amp; \left[\begin{array}{rrr}&lt;br /&gt;
A_{11} &amp;amp; A_{12} &amp;amp; A_{13}\\&lt;br /&gt;
A_{21} &amp;amp; A_{22} &amp;amp; A_{23}&lt;br /&gt;
\end{array}\right]\left[\begin{array}{c}&lt;br /&gt;
B_{11}\\&lt;br /&gt;
B_{21}\\&lt;br /&gt;
B_{31}&lt;br /&gt;
\end{array}\right]\\&lt;br /&gt;
 &amp;amp; = &amp;amp; \left[\begin{array}{r}&lt;br /&gt;
A_{11}B_{11}+A_{12}B_{21}+A_{13}B_{31}\\&lt;br /&gt;
A_{21}B_{11}+A_{22}B_{21}+A_{23}B_{31}&lt;br /&gt;
\end{array}\right].\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But what are the row dimensions of the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt; And what are the possible column dimensions of the submatrices in &amp;lt;math&amp;gt;B?&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Matrices, vectors and econometrics =&lt;br /&gt;
&lt;br /&gt;
The data on weights and heights for 12 students in the data matrix: &amp;lt;math&amp;gt;D=\left[\begin{array}{cc}&lt;br /&gt;
155 &amp;amp; 70\\&lt;br /&gt;
150 &amp;amp; 63\\&lt;br /&gt;
180 &amp;amp; 72\\&lt;br /&gt;
135 &amp;amp; 60\\&lt;br /&gt;
156 &amp;amp; 66\\&lt;br /&gt;
168 &amp;amp; 70\\&lt;br /&gt;
178 &amp;amp; 74\\&lt;br /&gt;
160 &amp;amp; 65\\&lt;br /&gt;
132 &amp;amp; 62\\&lt;br /&gt;
145 &amp;amp; 67\\&lt;br /&gt;
139 &amp;amp; 65\\&lt;br /&gt;
152 &amp;amp; 68&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; would seem to be ideally suited for fitting a two variable regression model: &amp;lt;math&amp;gt;y_{i}=\alpha+\beta x_{i}+u_{i},\;\;\;\;\; i=1,...,12.&amp;lt;/math&amp;gt; Here, the first column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the weight data, the data on the dependent variable &amp;lt;math&amp;gt;y_{i},&amp;lt;/math&amp;gt; and so should be labelled &amp;lt;math&amp;gt;\mathbf{y.}&amp;lt;/math&amp;gt; The second column of &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; contains all the data on the explanatory variable height, in the vector &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; say, so that: &amp;lt;math&amp;gt;D=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{y} &amp;amp; \mathbf{x}\end{array}\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we define a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector with every element &amp;lt;math&amp;gt;1&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;\mathbf{1}_{12}=\left[\begin{array}{c}&lt;br /&gt;
1\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
1&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; and a &amp;lt;math&amp;gt;12\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\mathbf{u}&amp;lt;/math&amp;gt; to contain the error terms: &amp;lt;math&amp;gt;\mathbf{u}=\left[\begin{array}{c}&lt;br /&gt;
u_{1}\\&lt;br /&gt;
\vdots\\&lt;br /&gt;
u_{12}&lt;br /&gt;
\end{array}\right],&amp;lt;/math&amp;gt; the regression model can be written in terms of the three data vectors &amp;lt;math&amp;gt;\mathbf{y,1}_{12}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{1}_{12}\alpha+\mathbf{x}\beta+\mathbf{u.}&amp;lt;/math&amp;gt; To see this, think of the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th elements of the vectors on the left and right hand sides.&lt;br /&gt;
&lt;br /&gt;
The standard next step is then to combine the data vectors for the explanatory variables into a matrix: &amp;lt;math&amp;gt;X=\left[\begin{array}{rr}&lt;br /&gt;
\mathbf{1}_{12} &amp;amp; \mathbf{x}\end{array}\right],&amp;lt;/math&amp;gt; and then define a &amp;lt;math&amp;gt;2\times1&amp;lt;/math&amp;gt; vector &amp;lt;math&amp;gt;\boldsymbol{\delta}&amp;lt;/math&amp;gt; to contain the parameters &amp;lt;math&amp;gt;\alpha,\beta&amp;lt;/math&amp;gt; as: &amp;lt;math&amp;gt;\boldsymbol{\delta}=\left[\begin{array}{r}&lt;br /&gt;
\alpha\\&lt;br /&gt;
\beta&lt;br /&gt;
\end{array}\right]&amp;lt;/math&amp;gt; to give the data matrix representation of the regression model as: &amp;lt;math&amp;gt;\mathbf{y}=X\boldsymbol{\delta}+\mathbf{u.}&amp;lt;/math&amp;gt;&lt;br /&gt;
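&lt;br /&gt;
Equivalently, the data vectors and the matrix &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; can be constructed directly from &amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt; (a sketch, assuming a Matlab session in which D holds the data matrix above):&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; y = D(:, 1);    % weights: the dependent variable &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; x = D(:, 2);    % heights: the explanatory variable &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; X = [ones(12, 1) x];    % the sum vector as the first column &lt;br /&gt;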
&lt;br /&gt;
For the purposes of developing the theory of regression, this is the most convenient form of the regression model. It can represent regression models with any number of explanatory variables, and thus any number of parameters. The obvious point is that a knowledge of vector and matrix operations is needed to use and understand this form.&lt;br /&gt;
&lt;br /&gt;
We shall see later that there are two particular matrix and vector quantities associated with a regression model. The first is the matrix &amp;lt;math&amp;gt;X^{T}X,&amp;lt;/math&amp;gt; and the second the vector &amp;lt;math&amp;gt;X^{T}\mathbf{y.}&amp;lt;/math&amp;gt; The following Matlab code snippet provides the numerical values of these quantities for the weight data:&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; dset = load(&amp;#039;weights.mat&amp;#039;); &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xtx = dset.X&amp;#039; * dset.X; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; xty = dset.X&amp;#039; * dset.y; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xtx) &lt;br /&gt;
&lt;br /&gt;
 12     802&lt;br /&gt;
&lt;br /&gt;
802   53792&lt;br /&gt;
&lt;br /&gt;
&amp;amp;gt;&amp;amp;gt; disp(xty)&lt;br /&gt;
&lt;br /&gt;
  1850&lt;br /&gt;
&lt;br /&gt;
124258&lt;br /&gt;
&lt;br /&gt;
Hand calculation is of course possible, but not recommended.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3021</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3021"/>
				<updated>2013-09-10T12:55:30Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Trial for latex import:&lt;br /&gt;
&lt;br /&gt;
[[LNotes|Lnotes]]&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Login with your registration number (first 7 digits only): the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can login and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make clear meaning is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3020</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3020"/>
				<updated>2013-09-10T12:28:53Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Trial for latex import:&lt;br /&gt;
&lt;br /&gt;
[[Math|Lnotes]]&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only): the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make clear meaning is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3019</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3019"/>
				<updated>2013-09-10T12:28:29Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Trial for latex import:&lt;br /&gt;
&lt;br /&gt;
[[Math|Math]]&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only): the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make clear meaning is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Math&amp;diff=3018</id>
		<title>Math</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Math&amp;diff=3018"/>
				<updated>2013-09-10T11:01:00Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Created page with &amp;quot;  = Introduction =  In this Section we will demonstrate how to use instrumental variables (IV) estimation to estimate the parameters in a linear regression model. The material...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
In this section we demonstrate how to use instrumental variables (IV) estimation to estimate the parameters of a linear regression model. The material follows the notation in the Heij &amp;#039;&amp;#039;et al.&amp;#039;&amp;#039; textbook&amp;lt;ref&amp;gt;Heij, C., de Boer, P., Franses, P.H., Kloek, T. and van Dijk, H.K. (2004) Econometric Methods with Applications in Business and Economics, Oxford University Press, New York [http://www.amazon.co.uk/Econometric-Methods-Applications-Business-Economics/dp/0199268010/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1354473313&amp;amp;sr=1-1]. This is an all-round good textbook that presents econometrics using matrix algebra.&lt;br /&gt;
&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We consider the linear regression model&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbf{y}=\mathbf{X\beta }+\mathbf{\varepsilon }.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The issue is that we may suspect (or know) that the explanatory variables are correlated with the (unobserved) error term, so that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p\lim \left( \frac{1}{n}\mathbf{X}^{\prime }\mathbf{\varepsilon }\right) \neq 0.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reasons for such a situation include measurement error in &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;, endogenous explanatory variables, omitted relevant variables or a combination of the above. The consequence is that the OLS parameter estimate of &amp;lt;math&amp;gt;\mathbf{\beta}&amp;lt;/math&amp;gt; is biased and inconsistent. Fortunately it is well established that an IV estimation of &amp;lt;math&amp;gt;\mathbf{\beta}&amp;lt;/math&amp;gt; can potentially deliver consistent parameter estimates. This does, however, require the availability of sufficient instruments &amp;lt;math&amp;gt;\mathbf{Z}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before continuing it is advisable to be clear about the dimensions of certain variables. Let’s assume that &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;(n \times 1)&amp;lt;/math&amp;gt; vector containing the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; observations for the dependent variable. &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; is a &amp;lt;math&amp;gt;(n \times k)&amp;lt;/math&amp;gt; matrix with the &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; explanatory variables in the columns, usually containing a vector of 1s in the first column, representing a regression constant. Now, let &amp;lt;math&amp;gt;\mathbf{Z}&amp;lt;/math&amp;gt; be a &amp;lt;math&amp;gt;(n \times p)&amp;lt;/math&amp;gt; matrix with instruments. Importantly, &amp;lt;math&amp;gt;p \ge k&amp;lt;/math&amp;gt;, and further &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathbf{Z}&amp;lt;/math&amp;gt; may have columns in common. If so, these are explanatory variables from &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; that are judged to be certainly uncorrelated with the error term (like the constant).&lt;br /&gt;
&lt;br /&gt;
It is well established that the instrumental variables in &amp;lt;math&amp;gt;\mathbf{Z}&amp;lt;/math&amp;gt; need to meet certain restrictions in order to deliver useful IV estimators of &amp;lt;math&amp;gt;\mathbf{\beta}&amp;lt;/math&amp;gt;. They need to be uncorrelated with the error terms, and they need to be correlated with the explanatory variables in &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; that are deemed to be endogenous (related to the error term). Further, they should have no relevance for the dependent variable other than through their relation to the potentially endogenous variables (exclusion assumption).&lt;br /&gt;
&lt;br /&gt;
A number of MATLAB functions can be found [[ExampleCodeIV|here]].&lt;br /&gt;
&lt;br /&gt;
= IV estimator =&lt;br /&gt;
&lt;br /&gt;
It is well established that the IV estimator can be computed as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\mathbf{\widehat{\beta}}_{IV} = \left(\mathbf{X}&amp;#039;\mathbf{P}_Z \mathbf{X}\right)^{-1} \mathbf{X}&amp;#039;\mathbf{P}_Z \mathbf{y}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbf{P}_Z=\mathbf{Z}\left(\mathbf{Z}^{\prime}\mathbf{Z}\right)^{-1}\mathbf{Z}^{\prime}&amp;lt;/math&amp;gt; is the projection matrix of &amp;lt;math&amp;gt;\mathbf{Z}&amp;lt;/math&amp;gt;. When performing inference, the variance-covariance matrix of &amp;lt;math&amp;gt;\mathbf{\widehat{\beta}}_{IV}&amp;lt;/math&amp;gt; is of obvious interest, and it is calculated as follows&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;Var\left(\mathbf{\widehat{\beta}}_{IV} \right) =  \sigma ^{2}\left( \mathbf{X}^{\prime }\mathbf{P}_{Z}\mathbf{X}\right)^{-1}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where the estimate for the error variance comes from&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
s_{IV}^{2} &amp;amp;=&amp;amp;\frac{1}{n-k}\widehat{\mathbf{\varepsilon }}_{IV}^{\prime }\widehat{\mathbf{\varepsilon }}_{IV} \\&lt;br /&gt;
&amp;amp;=&amp;amp;\frac{1}{n-k}\left( \mathbf{y-X}\widehat{\mathbf{\beta }}_{IV}\right)&lt;br /&gt;
^{\prime }\left( \mathbf{y-X}\widehat{\mathbf{\beta }}_{IV}\right)\end{aligned}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== MATLAB implementation ==&lt;br /&gt;
&lt;br /&gt;
The following code extract assumes that &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt; contains the &amp;lt;math&amp;gt;(n \times 1)&amp;lt;/math&amp;gt; vector with the dependent variable, that the &amp;lt;math&amp;gt;(n \times k)&amp;lt;/math&amp;gt; matrix &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; contains all explanatory variables, and that &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; is a &amp;lt;math&amp;gt;(n \times p)&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;(p \ge k)&amp;lt;/math&amp;gt; with instruments.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;pz     = z*inv(z&amp;#039;*z)*z&amp;#039;;    % Projection matrix&lt;br /&gt;
xpzxi  = inv(x&amp;#039;*pz*x);      % this is also (Xhat&amp;#039;Xhat)^(-1)&lt;br /&gt;
&lt;br /&gt;
biv    = xpzxi*x&amp;#039;*pz*y;     % IV estimate&lt;br /&gt;
res    = y - x*biv;         % IV residuals&lt;br /&gt;
ssq    = res&amp;#039;*res/(n-k);    % Sample variance for IV residuals&lt;br /&gt;
s      = sqrt(ssq);         % Sample Standard deviation for IV res&lt;br /&gt;
bse    = ssq*xpzxi;         % Variance covariance matrix for IV estimates&lt;br /&gt;
bse    = sqrt(diag(bse));   % Extract diagonal and take square root -&amp;gt; standard errors for IV estimators&amp;lt;/source&amp;gt;&lt;br /&gt;
= IV related Testing procedures =&lt;br /&gt;
&lt;br /&gt;
One feature of IV estimation is that, in general, it is an inferior estimator of &amp;lt;math&amp;gt;\mathbf{\beta}&amp;lt;/math&amp;gt; if all explanatory variables are exogenous. In that case, assuming that all other Gauss-Markov assumptions are met, the OLS estimator is BLUE. In other words, IV estimators have larger standard errors for the coefficient estimates. Therefore, one would like to avoid relying on IV estimators unless, of course, they are the only estimators that deliver consistent estimates.&lt;br /&gt;
&lt;br /&gt;
For this reason, any application of IV should be accompanied by evidence that it was necessary. Once that is established, one should also demonstrate that the chosen instruments meet the necessary requirements (being correlated with the endogenous variables and exogenous to the regression error term).&lt;br /&gt;
&lt;br /&gt;
== Testing for exogeneity ==&lt;br /&gt;
&lt;br /&gt;
The null hypothesis to be tested here is that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;p\lim \left( \frac{1}{n}\mathbf{X}^{\prime }\mathbf{\varepsilon }\right) = 0,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and therefore whether an IV estimation is required or not. The procedure described is as in Heij &amp;#039;&amp;#039;et al.&amp;#039;&amp;#039; It consists of the following three steps.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Estimate &amp;lt;math&amp;gt;\mathbf{y}=\mathbf{X\beta }+\mathbf{\varepsilon}&amp;lt;/math&amp;gt; by OLS and save the residuals &amp;lt;math&amp;gt;\widehat{\mathbf{\varepsilon}}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Estimate&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\mathbf{x}_{j}=\mathbf{Z\gamma }_{j}\mathbf{+v}_{j}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;by OLS for all &amp;lt;math&amp;gt;\widetilde{k}&amp;lt;/math&amp;gt; elements in &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; that are possibly endogenous and save &amp;lt;math&amp;gt;\widehat{\mathbf{v}}_{j}&amp;lt;/math&amp;gt;. Collect these in the &amp;lt;math&amp;gt;\left(&lt;br /&gt;
        n\times \widetilde{k}\right) &amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;\widehat{\mathbf{V}}&amp;lt;/math&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Estimate the auxiliary regression&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\widehat{\mathbf{\varepsilon }}=\mathbf{X\delta }_{0}+\widehat{\mathbf{V}}        \mathbf{\delta }_{1}+\mathbf{u}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;and test the following hypothesis&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\begin{aligned}&lt;br /&gt;
        H_{0} &amp;amp;:&amp;amp;\mathbf{\delta }_{1}=0~~\mathbf{X}\text{ is exogenous} \\&lt;br /&gt;
        H_{A} &amp;amp;:&amp;amp;\mathbf{\delta }_{1}\neq 0~~\mathbf{X}\text{ is endogenous}&lt;br /&gt;
        \end{aligned}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;using the usual test statistic &amp;lt;math&amp;gt;\chi ^{2}=nR^{2}&amp;lt;/math&amp;gt; which, under &amp;lt;math&amp;gt;H_{0}&amp;lt;/math&amp;gt;, is &amp;lt;math&amp;gt;&lt;br /&gt;
        \chi ^{2}\left( \widetilde{k}\right) &amp;lt;/math&amp;gt; distributed.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Implementing this test requires nothing but the application of OLS regressions. In the following excerpt we assume that the dependent variable is contained in vector &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt;, the elements in &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; that are assumed to be exogenous are contained in &amp;lt;code&amp;gt;x1&amp;lt;/code&amp;gt;, those suspected of being endogenous are in &amp;lt;code&amp;gt;x2&amp;lt;/code&amp;gt;, and the instrument matrix is saved in &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt;. As before, it is assumed that &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; contains all elements of &amp;lt;code&amp;gt;x1&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The code also uses the &amp;lt;code&amp;gt;OLSest&amp;lt;/code&amp;gt; function for the Step 3 regression. However, that could easily be avoided, as for the regressions in Steps 1 and 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;x = [x1 x2];            % Combine to one matrix x&lt;br /&gt;
xxi   = inv(x&amp;#039;*x);&lt;br /&gt;
b     = xxi*x&amp;#039;*y;       % Step 1: OLS estimator&lt;br /&gt;
res   = y - x*b;        % Step 1: saved residuals&lt;br /&gt;
&lt;br /&gt;
zzi   = inv(z&amp;#039;*z);      % Step 2: inv(Z&amp;#039;Z)&lt;br /&gt;
gam   = zzi*z&amp;#039;*x2;      % Step 2: Estimate OLS coefficients of Step 2 regressions&lt;br /&gt;
                        % This works even if we have more than one element in x2:&lt;br /&gt;
                        % we get as many columns of gam as we have elements in x2&lt;br /&gt;
vhat  = x2 - z*gam;     % Step 2: residuals (has as many columns as x2)&lt;br /&gt;
&lt;br /&gt;
[b,bse,res,n,rss,r2] = OLSest(res,[x vhat],0);  % Step 3 regression&lt;br /&gt;
teststat = size(res,1)*r2;                  % Step 3: Calculate nR^2 test stat&lt;br /&gt;
pval = 1 - chi2cdf(teststat,size(x2,2));    % Step 3: Calculate p-value&amp;lt;/source&amp;gt;&lt;br /&gt;
A function that implements this test can be found [[ExampleCodeIV#Hausmann|here]].&lt;br /&gt;
&lt;br /&gt;
== Sargan test for instrument validity ==&lt;br /&gt;
&lt;br /&gt;
One crucial property of instruments is that they ought to be uncorrelated with the regression error terms &amp;lt;math&amp;gt;\mathbf{\varepsilon}&amp;lt;/math&amp;gt;. Instrument exogeneity is set as the null hypothesis of this test, with the alternative hypothesis being that the instruments are endogenous.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Estimate the regression model by IV and save &amp;lt;math&amp;gt;\widehat{\mathbf{\varepsilon }}_{IV}=\mathbf{y}-\mathbf{X}\widehat{\mathbf{\beta }}_{IV}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Regress&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;lt;math&amp;gt;\widehat{\mathbf{\varepsilon }}_{IV}=\mathbf{Z\gamma +u}&amp;lt;/math&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Calculate &amp;lt;math&amp;gt;LM=nR^{2}&amp;lt;/math&amp;gt; from the auxiliary regression in Step 2. &amp;lt;math&amp;gt;LM&amp;lt;/math&amp;gt; is (under &amp;lt;math&amp;gt;H_{0}&amp;lt;/math&amp;gt;) &amp;lt;math&amp;gt;\chi ^{2}&amp;lt;/math&amp;gt; distributed with &amp;lt;math&amp;gt;\left( p-k\right) &amp;lt;/math&amp;gt; degrees of freedom.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The MATLAB implementation of this test relies on the availability of the IV parameter estimates. They can be calculated as indicated above. In [[ExampleCodeIV#IVest|this section]] you can find a function called &amp;lt;code&amp;gt;IVest&amp;lt;/code&amp;gt; that can deliver the required IV residuals by calling:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;[biv,bseiv,resiv,r2iv] = IVest(y,x,z);&amp;lt;/source&amp;gt;&lt;br /&gt;
The third output contains the IV residuals (refer to [[ExampleCodeIV#IVest|IVest]] for details), which can then be used as the dependent variable in the second-step regression:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;[b,bse,res,n,rss,r2] = OLSest(resiv,z,0);               % Step 2: calculate Step 2 regression&lt;br /&gt;
teststat = size(resiv,1)*r2;                            % Step 3: Calculates the nR^2 test statistic&lt;br /&gt;
pval = 1 - chi2cdf(teststat,(size(z,2)-size(x,2)));     % Step 3: Calculate p-value&amp;lt;/source&amp;gt;&lt;br /&gt;
It should be noted that this test is only applicable in the over-identified case, when &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; contains more columns than &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt;. A function that implements this test can be found [[ExampleCodeIV#Sargan|here]].&lt;br /&gt;
&lt;br /&gt;
== Instrument relevance ==&lt;br /&gt;
&lt;br /&gt;
The last required instrument property is that the instruments are correlated with the potentially endogenous variables. This is tested using a standard OLS regression with an endogenous variable as the dependent variable and all instrument variables (i.e. &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt;) as the explanatory variables. We then test whether all (non-constant) variables in &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; are jointly significant (F-test). If they are, then the instruments are relevant. This is, in fact, exactly what the Step 2 regressions of the Hausman test do.&lt;br /&gt;
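&lt;br /&gt;
As a minimal sketch of this first-stage relevance check (in Python with NumPy rather than the MATLAB used elsewhere on this page; the inputs &amp;lt;code&amp;gt;x2&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; are hypothetical and follow the naming above, with the first column of &amp;lt;code&amp;gt;z&amp;lt;/code&amp;gt; being the constant):&lt;br /&gt;

```python
import numpy as np

def first_stage_F(x2, z):
    """F-statistic for the joint significance of the non-constant
    instruments in a first-stage regression of x2 on z
    (first column of z is assumed to be the constant)."""
    n, p = z.shape
    # Unrestricted regression: x2 on all instruments
    g, *_ = np.linalg.lstsq(z, x2, rcond=None)
    rss_u = np.sum((x2 - z @ g) ** 2)
    # Restricted regression: constant only
    rss_r = np.sum((x2 - x2.mean()) ** 2)
    q = p - 1   # number of restrictions (non-constant instruments)
    return ((rss_r - rss_u) / q) / (rss_u / (n - p))
```

A large F-statistic (a common rule of thumb is a value above roughly 10) indicates that the instruments are relevant; a small value signals weak instruments.&lt;br /&gt;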
&lt;br /&gt;
=Footnotes=&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3017</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3017"/>
				<updated>2013-09-10T10:48:21Z</updated>
		
		<summary type="html">&lt;p&gt;LG: /* Important Notice */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
[[Math|Math]]&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only): the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make clear meaning is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3016</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3016"/>
				<updated>2013-09-10T10:45:56Z</updated>
		
		<summary type="html">&lt;p&gt;LG: /* Using Maple TA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3015</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3015"/>
				<updated>2013-09-10T10:45:29Z</updated>
		
		<summary type="html">&lt;p&gt;LG: /* Using Maple TA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;$1/x - 1$&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3014</id>
		<title>File:Lecture 2.pdf</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3014"/>
				<updated>2013-09-09T14:13:46Z</updated>
		
		<summary type="html">&lt;p&gt;LG: LG uploaded a new version of &amp;amp;quot;File:Lecture 2.pdf&amp;amp;quot;: These notes cover the introductory matrix material, and should be studied before the lectures on ECON61001 start.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes cover the introductory matrix material, and should be studied before the lectures on ECON61001 start.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3013</id>
		<title>File:Lecture 2.pdf</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3013"/>
				<updated>2013-09-09T14:13:10Z</updated>
		
		<summary type="html">&lt;p&gt;LG: LG uploaded a new version of &amp;amp;quot;File:Lecture 2.pdf&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes cover the introductory matrix material, and should be studied before the lectures on ECON61001 start.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3012</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3012"/>
				<updated>2013-09-09T13:19:33Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3011</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3011"/>
				<updated>2013-09-09T13:17:43Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments: there is usually a delay whilst they are loaded. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3010</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3010"/>
				<updated>2013-09-09T13:16:36Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material, and test their understanding using Maple TA.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; or &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula etc) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier than, and sometimes harder than, the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3009</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3009"/>
				<updated>2013-09-09T13:15:50Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture 2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester] http://place36.placementtester.com/manchester&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password.&lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups, ExSheet 2 or ExSheet 2 Randomised Questions, or by individual question - a component of each of the question groups. Picking a question group means that you have to answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3008</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3008"/>
				<updated>2013-09-09T13:15:17Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:Lecture2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups (ExSheet 2 or ExSheet 2 Randomised Questions) or as individual questions, each a component of one of the question groups. Picking a question group means that you must answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3007</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3007"/>
				<updated>2013-09-09T13:14:35Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:lecture2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups (ExSheet 2 or ExSheet 2 Randomised Questions) or as individual questions, each a component of one of the question groups. Picking a question group means that you must answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3006</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3006"/>
				<updated>2013-09-09T13:14:08Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
[[Media:lecture2.pdf]]&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
[[Media:L2_slide_ho.pdf]]&lt;br /&gt;
&lt;br /&gt;
respectively.&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. The link to this material is&lt;br /&gt;
&lt;br /&gt;
[[Media:Xs2.pdf]]&lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheet. The assignments are organised as question groups (ExSheet 2 or ExSheet 2 Randomised Questions) or as individual questions, each a component of one of the question groups. Picking a question group means that you must answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are repeated in the Exercise Sheet.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3005</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3005"/>
				<updated>2013-09-09T13:05:31Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. The pdf files containing this material are &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. &lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. In this module, the questions on the paper Exercise Sheets are also available as Maple TA assignments. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
and this link is also given in each Lecture folder, for convenience. Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheets. The assignments are organised as question groups (ExSheet 2 or ExSheet 2 Randomised Questions) or as individual questions, each a component of a question group. Picking a question group means that you must answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are often repeated in questions.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Xs2.pdf&amp;diff=3004</id>
		<title>File:Xs2.pdf</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Xs2.pdf&amp;diff=3004"/>
				<updated>2013-09-09T13:03:55Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Exercise questions for Lecture 2 of ECON61001 - they cover the introductory matrix material. You should answer these questions using Maple TA - as soon as you have answered a question, you can grade it straight away. This will show whether your answer ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Exercise questions for Lecture 2 of ECON61001 - they cover the introductory matrix material. You should answer these questions using Maple TA - as soon as you have answered a question, you can grade it straight away. This will show whether your answer is correct, and also provide a sketch answer.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:L2_slide_ho.pdf&amp;diff=3003</id>
		<title>File:L2 slide ho.pdf</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:L2_slide_ho.pdf&amp;diff=3003"/>
				<updated>2013-09-09T13:02:18Z</updated>
		
		<summary type="html">&lt;p&gt;LG: Presentation slides for the introductory matrix material for ECON61001.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Presentation slides for the introductory matrix material for ECON61001.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3002</id>
		<title>File:Lecture 2.pdf</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=File:Lecture_2.pdf&amp;diff=3002"/>
				<updated>2013-09-09T13:01:30Z</updated>
		
		<summary type="html">&lt;p&gt;LG: These notes cover the introductory matrix material, and should be studied before the lectures on ECON61001 start.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These notes cover the introductory matrix material, and should be studied before the lectures on ECON61001 start.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	<entry>
		<id>http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3001</id>
		<title>Maths</title>
		<link rel="alternate" type="text/html" href="http://eclr.humanities.manchester.ac.uk/index.php?title=Maths&amp;diff=3001"/>
				<updated>2013-09-09T12:59:46Z</updated>
		
		<summary type="html">&lt;p&gt;LG: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Important Notice ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As preparation for the lectures in ECON61001 Econometric Methods, MSc students are expected to read and understand the material in Lecture Notes 2 (Matrices - 1) or in Lecture Slides 2, as part of Econ60901 PreSession Maths. (Give pdf links?)&lt;br /&gt;
&lt;br /&gt;
Students are also expected to tackle the corresponding questions on Econ61001 Exercise Sheet 2, either on paper or online using Maple TA, as part of the PreSession Maths course. &lt;br /&gt;
&lt;br /&gt;
Students on the MA program are welcome to try out this material.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using Maple TA ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Maple T.A. is an easy-to-use web-based system for creating tests and assignments, and automatically assessing student responses and performance. In this module, the questions on the paper Exercise Sheets are also available as Maple TA assignments. The idea is that you can answer the questions at your leisure, and have them correctly graded by Maple TA. Once you have answered a question, the correct answers and/or sketch answers are immediately available. You can go back and attempt the same question as many times as you like.&lt;br /&gt;
&lt;br /&gt;
Maple TA is located at&lt;br /&gt;
&lt;br /&gt;
[http://place36.placementtester.com/manchester]&lt;br /&gt;
&lt;br /&gt;
and this link is also given in each Lecture folder, for convenience. Log in with your registration number (first 7 digits only); the password is also your registration number. On the page that follows, you can click on MyProfile and then Password Update to change your password. &lt;br /&gt;
&lt;br /&gt;
You should select the course&lt;br /&gt;
&lt;br /&gt;
ECON60901 PreSession Maths&lt;br /&gt;
&lt;br /&gt;
by clicking on the entry for this course. This will bring up a page of assignments. You can click on the assignment you want to do - the notation follows that in the exercise sheets. The assignments are organised as question groups (ExSheet 2 or ExSheet 2 Randomised Questions) or as individual questions, each a component of a question group. Picking a question group means that you must answer all the questions in that group before you can grade your answers. Picking an individual question enables you to grade your answers immediately.&lt;br /&gt;
&lt;br /&gt;
When you click on an assignment, you are given a choice between &amp;quot;Print assignment for off-line work&amp;quot; and &amp;quot;Work assignment on-line right now&amp;quot;. If you choose to print, wait for the questions to be printed. When you have answered the questions, you can log in and click on the assignment again, and choose the &amp;quot;Work ... online&amp;quot; option to enter your answers.&lt;br /&gt;
&lt;br /&gt;
Usually, you are given information on the type of response (number, formula, etc.) you are expected to give. If not, a textual response is required. In general, it is better to show arithmetic operators (+, -, *, /, ^) explicitly in your answers. Use of brackets to make your meaning clear is also encouraged: what exactly is meant by &amp;quot;1/x - 1&amp;quot; - is it (1/x) - 1 or 1/(x - 1)? Additional information about the entry of vectors and matrices in your answers is given in a separate document, although the instructions are often repeated in questions.&lt;br /&gt;
&lt;br /&gt;
When you have finished one page of questions, click Next to go to the next part of the main question. You can also use the drop-down menu of the Question item. When you have finished, click Grade and view details to see the marked version of your answers. This screen also contains a Comments section, which gives sketch answers. You can also click on Quit and Save, or on Print.&lt;br /&gt;
&lt;br /&gt;
You can save your work and return to it later if you clicked on Quit and Save when doing the assignment, but before clicking on Grade. To return, simply find the assignment in the Class Homepage list, and click on it.&lt;br /&gt;
&lt;br /&gt;
To inspect completed and marked assignments, start from the Assignments page by clicking on Class Homepage if necessary. Click on Gradebook, and select View Past Results. Select the assignment you want to inspect, click on Search, find the assignment in the list at the bottom of the page, and click on Details.&lt;br /&gt;
&lt;br /&gt;
The questions on ExSheet 2 Randomised Questions are randomised in the sense that Maple TA generates the numbers, which are different every time the question is attempted. These questions are intended for additional practice, should this be required, or for revision. These &amp;quot;randomised&amp;quot; questions are sometimes easier and sometimes harder than the corresponding Exercise Sheet questions. If you find one of these randomised questions to be too hard, simply click on the &amp;quot;Refresh&amp;quot; button at the top of the page to get another question.&lt;/div&gt;</summary>
		<author><name>LG</name></author>	</entry>

	</feed>