# Homework 1, due in class 2016-01-29

With the exception of the computational assignments this should be repetition from MATH 514.

**Basics**

- Define: Matrix product, Hadamard product, (non-)commuting matrices, transpose, symmetric matrix, skew-symmetric matrix, normal matrix, inner product of two vectors, Euclidean length of a vector, outer product of two vectors.
- What are Hermitian and unitary matrices?

**Vector spaces** We are concerned with the vector spaces \(\mathbb{R}^n\) and \(\mathbb{C}^n\).

- Define: Linearly independent, basis, standard basis, coordinates with respect to a basis, linear transformation.
- What is the rank of a matrix (square or non-square)?
- Define the inverse of a matrix.

**Orthogonality**

- What does it mean that two vectors are orthogonal? Orthonormal?
- Let \(S\) be a subspace of \(\mathbb{R}^n\). Define its orthogonal complement.
- What properties does an orthogonal matrix have?

**Block Matrices**

What is a submatrix?

What is a (leading) principal submatrix?

Consider the linear system \(Mx=b\) written in block \(2 \times 2\) form

\[\begin{eqnarray} A x_1 + B x_2 &=& b_1, \\ C x_1 + D x_2 &=& b_2. \end{eqnarray}\]

Assuming \(A\) is nonsingular, find the matrices \(S\) and \(G\) defined by \(S x_2 = b_2 + G b_1\). Write the elimination step as multiplication by a block lower triangular matrix to find a block factorization of \(M\) into a product of a block lower triangular matrix with identity matrices on the diagonal and a block upper triangular matrix.
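Not part of the assignment, but the elimination can be sanity-checked numerically. The sketch below (in Python/NumPy) assumes the standard Schur-complement answer \(S = D - C A^{-1} B\), \(G = -C A^{-1}\), so treat it as a check on your own derivation rather than the derivation itself:

```python
# Numerical check of block elimination, assuming S = D - C A^{-1} B and
# G = -C A^{-1}: eliminating x_1 gives S x_2 = b_2 + G b_1, and M = L U
# with L = [[I, 0], [C A^{-1}, I]], U = [[A, B], [0, S]].
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
M = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B          # Schur complement of A in M
G = -C @ Ainv

I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[I, Z], [C @ Ainv, I]])   # block lower triangular, unit diagonal
U = np.block([[A, B], [Z, S]])          # block upper triangular
assert np.allclose(L @ U, M)

# Check S x_2 = b_2 + G b_1 against a direct solve of M x = b.
b = rng.standard_normal(2 * n)
x = np.linalg.solve(M, b)
b1, b2 = b[:n], b[n:]
assert np.allclose(S @ x[n:], b2 + G @ b1)
```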

Use \(M^{-1} = (LU)^{-1} = U^{-1} L^{-1}\) to find a formula for \(M^{-1}\).

Assume that \(D\) is nonsingular and perform block Gaussian elimination in the reverse order to find the factorization of \(M\) consisting of the product of a block upper triangular matrix with identity matrices on the diagonal and a block lower triangular matrix.

Repeat as above to get an alternative expression for \(M^{-1}\).

Compare the (1,1) blocks of the two formulas for \(M^{-1}\) to verify that you have just proved the Woodbury formula

\[\begin{eqnarray} (A-BD^{-1}C)^{-1} = A^{-1} + A^{-1} B(D-C A^{-1} B)^{-1} C A^{-1}. \end{eqnarray}\]

When is this formula useful (efficient)?
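A quick numerical check of the formula (a sketch in Python/NumPy; the shifts on the diagonals are only there to keep the random matrices safely nonsingular):

```python
# Verify the Woodbury formula on random matrices with k < n.
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2
A = rng.standard_normal((n, n)) + 4 * np.eye(n)   # n x n, nonsingular
D = rng.standard_normal((k, k)) + 4 * np.eye(k)   # k x k, nonsingular
B = rng.standard_normal((n, k))
C = rng.standard_normal((k, n))

Ainv, Dinv = np.linalg.inv(A), np.linalg.inv(D)
lhs = np.linalg.inv(A - B @ Dinv @ C)
rhs = Ainv + Ainv @ B @ np.linalg.inv(D - C @ Ainv @ B) @ C @ Ainv
assert np.allclose(lhs, rhs)
```

Note the economics the exercise is after: the right-hand side inverts only the \(k \times k\) matrix \(D - C A^{-1} B\), which is cheap when \(k \ll n\) and \(A^{-1}\) (or a factorization of \(A\)) is already available.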

Consider the special case when \(D = \sigma\) is a scalar and prove the formula in that case by direct computation.

Suppose you have solved \(Ax = b\) by the \(LU\) decomposition of \(A\) and now you would like to find the solution to the modified problem \((A-\sigma^{-1}uv^T)\hat{x}=b\). Show how you can get the solution by reusing the already computed factorization.
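One way to sketch the reuse (in Python, with SciPy's `lu_factor`/`lu_solve` standing in for the stored factorization; the update formula in the comment is the \(D=\sigma\) case of Woodbury, which the previous exercises derive):

```python
# Reuse an LU factorization of A for the rank-one modified system
# (A - sigma^{-1} u v^T) xhat = b.  With D = sigma, Woodbury gives
#   xhat = x + w (v^T x) / (sigma - v^T w),
# where A x = b and A w = u: only two triangular solves with the
# existing factors, no new factorization.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 3 * np.eye(n)
b = rng.standard_normal(n)
u, v = rng.standard_normal(n), rng.standard_normal(n)
sigma = 2.0

lu, piv = lu_factor(A)        # computed once, reused below
x = lu_solve((lu, piv), b)    # solves A x = b
w = lu_solve((lu, piv), u)    # solves A w = u
xhat = x + w * (v @ x) / (sigma - v @ w)

assert np.allclose((A - np.outer(u, v) / sigma) @ xhat, b)
```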

**Determinants**

Let \(A\) be an \(n \times n\) matrix with columns \(a_1, a_2, \ldots, a_n\). Prove that

\[\begin{eqnarray} | \det (A) | \le \prod_{j=1}^{n} \| a_j \|_2. \end{eqnarray}\]
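This is Hadamard's inequality; before proving it, it can be sanity-checked numerically (a sketch, not the requested proof):

```python
# Check |det(A)| <= product of the 2-norms of the columns of A.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100):
    A = rng.standard_normal((5, 5))
    col_norms = np.linalg.norm(A, axis=0)   # 2-norm of each column
    assert abs(np.linalg.det(A)) <= np.prod(col_norms)
```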

**Norms**

Let \(p>1\), \(1/p + 1/q = 1\), and \(\alpha, \beta \geq 0\). Show that

\[\begin{eqnarray} \alpha \beta \leq \frac{\alpha^p}{p} + \frac{\beta^q}{q}. \end{eqnarray}\]

Use the result to prove that the triangle inequality holds for the \(p\)-norm of a vector \(x\),

\[\begin{eqnarray} \| x \|_p = \left( \sum_{i=1}^n | x_i |^p \right)^{1/p}. \end{eqnarray}\]

In \(\mathbb{C}^n\) all vector norms are equivalent, i.e. for two norms \(\| \cdot \|\) and \(\| \cdot \|^\prime\) there are two positive constants \(c\) and \(c^\prime\) such that

\[\begin{eqnarray} \frac{1}{c} \| x \|^\prime \leq \| x \| \leq c^\prime \| x \|^\prime, \ \ \ \ \forall x \in \mathbb{C}^n. \end{eqnarray}\]

What are these constants for \(p = 1, 2, \infty\)? What are the vectors \(x\) that make the inequalities equalities?
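The sketch below checks the standard equivalence constants one should recover (it states the answers, so use it only to verify your own derivation), together with the vectors that attain equality:

```python
# Check the standard equivalence bounds in R^n:
#   ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
#   ||x||_2   <= ||x||_1 <= sqrt(n) ||x||_2
#   ||x||_inf <= ||x||_1 <=      n  ||x||_inf
import numpy as np

rng = np.random.default_rng(4)
n = 7
for _ in range(100):
    x = rng.standard_normal(n)
    n1, n2, ninf = (np.linalg.norm(x, p) for p in (1, 2, np.inf))
    tol = 1e-12
    assert ninf <= n2 + tol and n2 <= np.sqrt(n) * ninf + tol
    assert n2 <= n1 + tol and n1 <= np.sqrt(n) * n2 + tol
    assert ninf <= n1 + tol and n1 <= n * ninf + tol

# Equality cases: a standard basis vector makes the lower bounds tight,
# the all-ones vector makes the upper bounds tight.
e1 = np.zeros(n); e1[0] = 1.0
ones = np.ones(n)
assert np.isclose(np.linalg.norm(e1, 1), np.linalg.norm(e1, 2))
assert np.isclose(np.linalg.norm(ones, 1), np.sqrt(n) * np.linalg.norm(ones, 2))
```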

Let \(\| A \|_p = \sup_{x \neq 0} \frac{\|A x\|_p}{ \|x \|_p}\) be the norm subordinate (or operator norm) to the vector \(p\)-norm. For \(p=2\) we have \(\| A \|_2 = \sigma_1\), the largest singular value. This norm is expensive to compute, so it can be good to know the estimate

\[\begin{eqnarray} \| A \|_2 \leq \left( \| A \|_\infty \| A \|_1 \right)^{1/2}. \end{eqnarray}\]

Use the fact that all subordinate \(p\)-norms are compatible to show that for any of \(A\)'s eigenvalues \(\lambda\) it holds that \(|\lambda| \leq \| A \|_p\). Then use that the subordinate \(p\)-norms are also submultiplicative to obtain the above estimate.
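The estimate is easy to check numerically (a sketch, not the requested proof; `np.linalg.norm(A, 2)` returns the largest singular value, and the 1- and \(\infty\)-norms are the maximum column and row sums):

```python
# Check ||A||_2 <= sqrt(||A||_inf * ||A||_1) on random matrices.
import numpy as np

rng = np.random.default_rng(5)
for _ in range(100):
    A = rng.standard_normal((6, 6))
    s1 = np.linalg.norm(A, 2)                       # sigma_1, expensive
    bound = np.sqrt(np.linalg.norm(A, np.inf) * np.linalg.norm(A, 1))
    assert s1 <= bound + 1e-12
```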

**Various**

Let \(S \in \mathbb{R}^{n \times n}\) be skew-symmetric. Show that \(I-S\) is nonsingular and that

\[\begin{eqnarray} Q = (I-S)^{-1}(I+S) \end{eqnarray}\]

is orthogonal.

Find \(Q\) if

\[\begin{gather} S = \left( \begin{array}{cc} 0 & \tan (\theta /2) \\ -\tan (\theta/2) & 0 \end{array} \right). \end{gather}\]
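A numerical check for this \(S\) (it gives away the answer, so verify your own computation first): assuming the half-angle identities \(\cos\theta = (1-t^2)/(1+t^2)\) and \(\sin\theta = 2t/(1+t^2)\) with \(t = \tan(\theta/2)\), \(Q\) works out to a plane rotation by \(\theta\).

```python
# Cayley transform Q = (I-S)^{-1}(I+S) for the given skew-symmetric S.
import numpy as np

theta = 0.7
t = np.tan(theta / 2)
S = np.array([[0.0, t], [-t, 0.0]])
I = np.eye(2)
Q = np.linalg.solve(I - S, I + S)      # (I-S)^{-1}(I+S) without forming the inverse

assert np.allclose(Q.T @ Q, I)         # Q is orthogonal
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
assert np.allclose(Q, R)               # rotation by theta
```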

**Computations** Warmup.

- Write a program that computes the determinant of a matrix using the cofactor expansion and tabulate the time it takes for matrices of size 2, 3, 4, ... (make sure you average your timing over many runs for the smaller sizes). How does the compute time scale with the size of the matrix?
- Compare the timing with that of the LAPACK routine `DGETRF`.
- A preview of backward and forward errors. Write a program that solves the \(2 \times 2\) system \(A\hat{x}=b\) using Cramer's rule and Gaussian elimination with partial pivoting (simply use `xGE = A\b` in Matlab). Let the elements of \(A\) and \(x\) be normally distributed, compute `b = A*x`, and solve using GEPP and Cramer's rule. Plot (in log-log scale) the relative errors in the solutions as a function of the condition number of the matrix, `cond(A)`. Also plot the relative residuals, i.e. `rrGE = norm(b-A*xGE)/norm(A)/norm(xGE)`, as a function of `cond(A)`. Do around 100,000 realizations. What do the norm of the error and the norm of the residual tell you?
- A definition of Euler's number is \(e = \lim_{n \rightarrow \infty} (1+1/n)^n\). Use \(n = 8^p, 10^p\) for \(p = 1,\ldots,17\), compute approximations to \(e\), and plot the error as a function of \(p\) (in log-log scale). If you have a pocket calculator, check whether it gives the same results. Can you explain the results?