# 17 In-Class Assignment: Decompositions and Gaussian Elimination

## Agenda for today's class (80 minutes)

## 2. Decompositions

Animated image from Wikipedia: https://wikipedia.org/

In numerical linear algebra, we factorize matrices to facilitate efficient and/or accurate computations. There are many possible **matrix decompositions**. Some, e.g., the eigendecomposition, require the matrix to be square, while others, e.g., the \(QR\) factorization, exist for arbitrary matrices. Among all possible decompositions (also called *factorizations*), some common examples include:

**QR Factorization** (from Gram-Schmidt orthogonalization): \(A = QR\)

- \(Q\) has orthonormal columns and \(R\) is an upper-triangular matrix
- If there are zero rows in \(R\), we can reduce the number of columns in \(Q\)
- Exists for arbitrary matrices

**LU / LDU Decomposition** (from Gaussian elimination): \(A = LU\) or \(A = LDU\)

- \(L\) is lower-triangular, \(U\) is upper-triangular, and \(D\) is diagonal
- Exists for **square** matrices
- Is *related to Gaussian elimination*

**Cholesky Decomposition**: \(A = R^TR\quad (= LDL^T)\)

- \(R\) is upper-triangular
- Factorization of \(A\) into \(R^TR\) requires \(A\) to be *symmetric* and *positive-definite*. The latter requires \(x^{T}Ax > 0\) for every nonzero \(x \in \mathbb{R}^n\). Note that \(x^{T}Ax\) is always a scalar value (e.g., note that \(x^TA = y^T\) for some vector \(y\in\mathbb{R}^n\), and \(y^Tx\) is the dot product between \(x\) and \(y\) and, hence, a real scalar).

**Schur Decomposition**: \(A = UTU^{T}\)

- \(U\) is orthogonal and \(T\) is upper-triangular
- Exists for every square matrix and says every such matrix, \(A\), is unitarily equivalent to an upper-triangular matrix, \(T\) (i.e., there exists an orthonormal basis with respect to which \(A\) is upper-triangular)
- The eigenvalues of \(A\) appear on the diagonal of \(T\)

**Singular Value Decomposition**: \(A = U\Sigma V^{T}\)

- \(U\) and \(V\) are orthogonal and \(\Sigma\) is diagonal
- Exists for arbitrary matrices

**Eigenvalue Decomposition**: \(A = X\Lambda X^{-1}\)

- \(X\) is invertible and \(\Lambda\) is diagonal
- Exists for square matrices with \(n\) linearly independent *eigenvectors* (i.e., diagonalizable matrices)
- Also called the *eigendecomposition*
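Several of these factorizations can be computed directly with NumPy. A minimal sketch (the matrix below is a hypothetical symmetric positive-definite example, chosen so that every factorization shown, including Cholesky, exists):

```python
import numpy as np

# A hypothetical symmetric positive-definite matrix
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])

# QR: Q has orthonormal columns, R is upper-triangular
Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))  # True

# Cholesky: NumPy returns the lower-triangular factor L with A = L L^T
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))  # True

# Eigendecomposition: A = X diag(lam) X^{-1}
lam, X = np.linalg.eig(A)
print(np.allclose(X @ np.diag(lam) @ np.linalg.inv(X), A))  # True

# SVD: A = U diag(s) V^T
U, s, Vt = np.linalg.svd(A)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
```

Each call reconstructs \(A\) from its factors, confirming the factorization.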

## 3. Focus on LU

In this assignment we will create algorithms that factorize invertible matrices (i.e., square matrices with linearly independent columns), \(A\), into the following decomposition (**note**: the terms *decomposition* and *factorization* are used interchangeably in literature):

LU Decomposition: \(A = LU\)

Each matrix in these decompositions is the *matrix product* of elementary matrices. Recall that *an elementary matrix differs from the identity matrix (the square matrix with \(1\)s on the diagonal and \(0\)s elsewhere) by an elementary row operation*.
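As a small illustration (the matrices below are hypothetical examples, not part of the assignment), each type of elementary row operation corresponds to left-multiplication by a matrix that differs from the identity by exactly that operation:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Type 1: swap rows 0 and 1 (identity with two rows exchanged)
E_swap = np.identity(3)[[1, 0, 2], :]

# Type 2: scale row 2 by 5 (identity with one diagonal entry changed)
E_scale = np.identity(3)
E_scale[2, 2] = 5.0

# Type 3: add -2 times row 0 to row 1 (identity with one off-diagonal entry)
E_add = np.identity(3)
E_add[1, 0] = -2.0

print(E_swap @ A)   # rows 0 and 1 exchanged
print(E_scale @ A)  # row 2 multiplied by 5
print(E_add @ A)    # row 1 becomes (row 1) - 2*(row 0)
```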

The use of these matrix decompositions in numerical linear algebra is motivated by computational efficiency, i.e., the reduction of *computational complexity* (recall "**Big-O notation**" and **counting the loops in your matrix multiplication algorithm**), and by *numerical stability*. Solving our old friend \(Ax = b\) by computing the inverse of \(A\), denoted \(A^{-1}\), and taking the matrix-vector product \(A^{-1}b = x\) is computationally expensive and numerically unstable, in general. If the \(LU\) decomposition of \(A\) exists, then it will be less costly and more stable to:

1. Solve \(Ly = b\) for \(y\) by *forward-substitution*; and then
2. Solve \(Ux = y\) for \(x\) by *backward-substitution*
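Those two triangular solves can be sketched in plain Python (the \(2\times 2\) system below is a made-up example, not the assignment's data):

```python
def forward_substitution(L, b):
    """Solve Ly = b for y, where L is lower-triangular."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # Subtract the already-known contributions, then divide by the diagonal
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def backward_substitution(U, y):
    """Solve Ux = y for x, where U is upper-triangular."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Work upward from the last row
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# Hypothetical factors: A = LU with L = [[1,0],[3,1]], U = [[2,1],[0,4]]
L = [[1.0, 0.0], [3.0, 1.0]]
U = [[2.0, 1.0], [0.0, 4.0]]
b = [4.0, 20.0]
y = forward_substitution(L, b)   # y = [4, 8]
x = backward_substitution(U, y)  # x = [1, 2]
```

Here \(A = LU = \begin{bmatrix}2 & 1\\ 6 & 7\end{bmatrix}\), and indeed \(Ax = b\).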

A final note to relate this assignment to the beginning of the course: the algorithms presented here are of a different class than the **Jacobi Algorithm** and **Gauss-Seidel Algorithm**, which are *iterative algorithms*. As you now know, this means that the algorithmic procedure is applied once, twice, and so on, until the output is within a desired tolerance, or until the process has been executed a given number of times (e.g., 100 iterations).

### Gaussian Elimination & LU Decomposition

Recall that Gaussian elimination converts an arbitrary matrix, \(A\), into its *row echelon form*. For our purposes, let's suppose that \(A\) is a square matrix and, therefore, an \(n\times n\) matrix. To simplify our tasks, let us further impose the condition that \(A\) is invertible. Thus, the columns of \(A\) are linearly independent. This means that Gaussian elimination will yield an **upper-triangular** *matrix*. Let us denote this matrix \(U\) for *upper-triangular*.

If there were a function, \(f\), that could take \(A\) as input and output \(U\), we could think of Gaussian elimination as the following process:

\[f(A) = U\]

With this information, we may now rewrite our equation from above as:

\[L^{-1}A = U\]

You may have noticed the superscript in \(L^{-1}\). This just says that \(L^{-1}\) is the inverse of some matrix \(L\). And for any invertible matrix, \(L\), we have that the matrix products satisfy:

\[L^{-1}L = LL^{-1} = I\]

This is analogous to (for every real number \(a\neq 0\)):

\[a^{-1}a = aa^{-1} = 1\]

Using the rules of matrix multiplication, verify the formula above by computing the following:

To understand \(L^{-1}\) more deeply, let's turn our attention back to Gaussian elimination for a moment. Take as a given that, for a "sufficiently nice" \(n\times n\) matrix \(A\), the matrix \(L^{-1}\) that takes \(A\) to its *upper-triangular* or **row echelon form**, \(U\), has the structure:

\[L^{-1} = L_{n-1}L_{n-2}\cdots L_{2}L_{1}\]

Each of the \(L_{i}\)s above is an elementary matrix that zeros out the subdiagonal entries of the \(i^{th}\) column of \(A\). This is **the \(i^{th}\) step of Gaussian Elimination applied to** *the entire \(i^{th}\) column of \(A\) below the \(i^{th}\) diagonal element*.
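As a small illustration on a hypothetical \(3\times 3\) matrix \(B\) (the assignment below uses a different, \(4\times 4\) matrix):

```python
import numpy as np

B = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

# L1 zeros out the subdiagonal entries of column 0
L1 = np.identity(3)
L1[1, 0] = -B[1, 0] / B[0, 0]
L1[2, 0] = -B[2, 0] / B[0, 0]
B1 = L1 @ B

# L2 zeros out the subdiagonal entry of column 1 of B1
L2 = np.identity(3)
L2[2, 1] = -B1[2, 1] / B1[1, 1]
U = L2 @ B1

Linv = L2 @ L1                      # the accumulated L^{-1} = L2 L1
print(np.allclose(U, np.triu(U)))   # True: U is upper-triangular
print(np.allclose(Linv @ B, U))     # True: L^{-1} B = U
```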

Let’s show this by computation of \(L_i\) for a “nice” matrix \(A\).

```
## Import all necessary packages
%matplotlib inline
import scipy.sparse as sparse #this helps to speed up the algorithms, but you will not use it.
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
sym.init_printing(use_unicode=True)
## These will allow us to see a cool simulation of the Heat Equation problem (if we compute our answers correctly!)
from matplotlib import animation, rc
from IPython.display import HTML
```


### Gaussian Elimination by Elementary Matrices, \(L_i\)

Let \(A\) be the following matrix:

\[A = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 4 & 3 & 3 & 1 \\ 8 & 7 & 9 & 5 \\ 6 & 7 & 9 & 8 \end{bmatrix}\]

✅ **DO THIS:** Create a \(4 \times 4\) **unit lower-triangular** matrix, \(L_1\) that eliminates all of the subdiagonal entries of the first column of \(A\). Display the matrix \(L_1\) using SymPy.

```
A = np.matrix([[2,1,1,0],[4,3,3,1],[8,7,9,5],[6,7,9,8]]) # Here is A for your convenience
As = sym.Matrix(A)
As
```

```
## Type your answer here ##
L1 = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) # start from the identity, then fill in the subdiagonal multipliers
```

We should now have the following:

\[A^{(1)} = L_{1}A = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & x & x & x \\ 0 & x & x & x \end{bmatrix}\]

Since our first row remained unchanged, we know that our \(u_{1i}\) (the first row entries of \(U\)) are now determined. Similarly, we have that the \(u_{2i}\) (the second row entries of \(U\)) are determined as well. The \(x\) elements are elements that have changed, but are not yet in their final form. **Note**: Your \(u_{ij}\) will be whole, or integer (\(\mathbb{Z}\)), numbers.

✅ **DO THIS:** Left-multiply \(A\) by \(L_1\) to confirm that all of the subdiagonal entries of the first column of \(A^{(1)}\) are zero. Display the result via SymPy.

```
## Type your answer here ##
```

Our next step will be to eliminate all nonzero entries below the diagonal in the second column of \(A^{(1)} = L_{1}A\) by left multiplication of \(L_{2}\). This should yield:

\[A^{(2)} = L_{2}A^{(1)} = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & x & x \end{bmatrix}\]

✅ **DO THIS:** Create a \(4 \times 4\) **unit lower-triangular** matrix, \(L_2\) that eliminates all of the subdiagonal entries of the second column of \(A^{(1)}\) yielding \(A^{(2)}\) as above. Display the matrix \(L_2\) using SymPy.

```
## Type your answer here ##
L2 = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) # start from the identity, then fill in the subdiagonal multipliers
```

✅ **DO THIS:** Left-multiply \(A^{(1)}\) by \(L_2\) to confirm that all of the subdiagonal entries of column 2 of \(A^{(2)}\) are zero. Display the result via SymPy. **Note**: Your \(u_{ij}\) will be whole, or integer (\(\mathbb{Z}\)), numbers.

```
## Type your answer here ##
```

We should now have:

\[A^{(2)} = L_{2}L_{1}A = \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & x & x \end{bmatrix}\]

We now want to build the final matrix \(L_{3}\) that will take our matrix \(A^{(2)}\) to **upper-triangular form**. So, let's do that!

✅ **DO THIS:** Create a \(4 \times 4\) **unit lower-triangular** matrix, \(L_3\) that eliminates all of the subdiagonal entries of the third column of \(A^{(2)}\) yielding:

\[\begin{align}A^{(3)} &= L_{3}A^{(2)} \\ &= L_{3}L_{2}A^{(1)} \\ &= L_{3}L_{2}L_{1}A \\ &= U \end{align}\]

Display the matrix \(L_3\) using SymPy.

```
## Type your answer here ##
L3 = np.matrix([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) # start from the identity, then fill in the subdiagonal multipliers
```

✅ **DO THIS:** Left-multiply \(A^{(2)}\) by \(L_3\) to confirm that all of the subdiagonal entries of column 3 of \(A^{(3)}\) are zero. Display the result via SymPy. **Note**: Your \(u_{ij}\) will be whole, or integer (\(\mathbb{Z}\)), numbers. You should now notice that \(A^{(3)} = U\) is in **row echelon form**, and, hence, \(U\) is an **upper-triangular matrix** with \(0\)s below the diagonal!

```
## Type your answer here ##
```

### Congratulations!

You have just decomposed your first matrix via the process

\[L_{3}L_{2}L_{1}A = U\]

and should now have a matrix, \(U\), that looks like the one below:

\[U = \begin{bmatrix} 2 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 2 \end{bmatrix}\]

✅ **DO THIS:**

Finally, let’s explicitly generate the matrices \(L^{-1}\) and \(L\). Then, display them using SymPy.

It will be helpful to use the following:

\[L^{-1} = L_{n-1}L_{n-2}\cdots L_{2}L_{1}\]

and

\[\begin{align}L &= (L^{-1})^{-1} \\ &= (L_{n-1}L_{n-2}\cdots L_{2}L_{1})^{-1} \\ &= L_{1}^{-1}L_{2}^{-1}\cdots L_{n-2}^{-1}L_{n-1}^{-1} \end{align}\]

If you’re stuck, refer to the paragraph at the beginning of this section for the explicit formula. Recall: \(L^{-1}L = LL^{-1} = I\).
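A useful fact, sketched here on a hypothetical elimination matrix: the inverse of each unit lower-triangular \(L_i\) is obtained by simply negating its subdiagonal entries.

```python
import numpy as np

# A hypothetical unit lower-triangular elimination matrix
L1 = np.array([[ 1.0, 0.0, 0.0],
               [-2.0, 1.0, 0.0],
               [-4.0, 0.0, 1.0]])

# Its inverse just flips the signs of the subdiagonal entries
L1_inv = np.array([[1.0, 0.0, 0.0],
                   [2.0, 1.0, 0.0],
                   [4.0, 0.0, 1.0]])

print(np.allclose(L1 @ L1_inv, np.identity(3)))  # True
print(np.allclose(L1_inv @ L1, np.identity(3)))  # True
```

This sign-flip rule holds because all of the nonzero subdiagonal entries of a single \(L_i\) sit in one column; it does not hold for a product of several \(L_i\)s.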

```
## Type your answer here ##
```

✅ **DO THIS:** Print all of the matrices \(L_i\) and the final \(L\), and observe the connection between the entries of each \(L_i\) and those of \(L\).

```
print(L1)
print(L2)
print(L3)
print(L)
```

For our last bit of LU decomposition fun, let's confirm that your matrices \(L\) and \(U\) fulfill the equality:

\[LU = A\]

Indeed, there is a function in SymPy, `LUdecomposition`, that will compute the LU decomposition for us.

✅ **DO THIS:** Run the following function and print its outputs:

Then, compute \(LU - A\) and confirm that it outputs the zero matrix.
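For reference, SymPy's `Matrix.LUdecomposition()` method returns the factors \(L\) and \(U\) together with a list of row swaps; a minimal sketch on a hypothetical \(2\times 2\) matrix (not the assignment's \(A\)):

```python
import sympy as sym

M = sym.Matrix([[2, 1], [6, 7]])
L, U, row_swaps = M.LUdecomposition()

print(L)          # unit lower-triangular factor
print(U)          # upper-triangular factor
print(L * U - M)  # the zero matrix (no row swaps were needed here)
```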

```
## Type your answer here ##
```

### General LU Decomposition Algorithm

✅ **DO THIS:** Using the scaffolded code below, complete the LU decomposition algorithm. (It may be helpful to test your code on the matrix \(A\) from above.)

```
## Type your answer here ##
C = np.matrix([[2,1,1,0],[4,3,3,1],[8,7,9,5],[6,7,9,8]]) # to test

def LU_decomp(B):
    n = len(B)
    U = B.copy().astype(float) # work in floating point so the row updates do not truncate
    L = np.identity(n)
    for k in np.arange(0, n-1):
        for j in np.arange(k+1, n):
            L[j,k] =                      # fill in: the multiplier for row j, column k
            U[j,k:n] = U[,:] - L[,]*U[,:] # fill in the indices
    return np.mat(L), np.mat(U)

L1, U1 = LU_decomp(C) # syntax for returning matrices
np.linalg.norm(L1*U1 - C) # Test: should return 0
```

### Solve \(Ax=b\) via LU Decomposition

You may wish to refer to the introduction of this assignment for a general overview of how to use LU Decomposition to solve \(Ax = b\).

✅ **DO THIS:** Using the scaffolded code below, complete the LU solver algorithm. The algorithm should solve \(Ly = b\) for \(y\) via Forward-Substitution and then \(Ux=y\) for \(x\) by Backward-Substitution. (It may be helpful to test your code on a matrix \(A\) and vector \(b\) from homework 1 or another source.)

```
## Type your answer here ##
def LU_Axb_solve(A, b):
    L, U = LU_decomp(A)
    n = len(A)

    # Forward-Substitution: solve Ly = b for y
    y = np.zeros((,))            # fill in the shape
    for i in np.arange(0, n):
        y[i] = b[i].copy()
        for j in np.arange(0, i):
            y[] = y[] - L[,]*y[] # fill in the indices

    # Backward-Substitution: solve Ux = y for x
    x = np.zeros((n,1))
    for i in np.arange(n-1, -1, -1):
        x[] = y[].copy()         # fill in the indices
        for j in np.arange(n-1, i, -1):
            x[] = x[] - U[,]*x[] # fill in the indices
        x[] = x[]/U[,]           # fill in the indices

    return np.mat(x)
```
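Once your solver is complete, it can be cross-checked against NumPy's built-in solver. A hedged sketch (the system below is a made-up, diagonally dominant example, and `LU_Axb_solve` is the function you complete above):

```python
import numpy as np

# A made-up, diagonally dominant test system (so elimination without
# pivoting is well-behaved); its exact solution is x = (1, 1, 1).
A = np.array([[10.0, 2.0, 1.0],
              [ 2.0, 8.0, 3.0],
              [ 1.0, 3.0, 9.0]])
b = np.array([13.0, 13.0, 13.0])

x_ref = np.linalg.solve(A, b)     # reference solution from NumPy
print(np.allclose(A @ x_ref, b))  # True

# After completing LU_Axb_solve above, compare your answer to x_ref:
# x = LU_Axb_solve(np.matrix(A), np.matrix(b).T)
# print(np.allclose(np.asarray(x).ravel(), x_ref))
```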

Written by Dr. Dirk Colbry, Michigan State University

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.