View LU Decomposition on YouTube
The process of Gaussian elimination also results in the factoring of the matrix \(\text{A}\) to
\[\text{A}=\text{LU},\nonumber \]
where \(\text{L}\) is a lower triangular matrix and \(\text{U}\) is an upper triangular matrix. Using the same matrix \(\text{A}\) as in the last section, we show how this factorization is realized. We have
\[\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)=\text{M}_1\text{A},\nonumber \]
where
\[\text{M}_1\text{A}=\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right).\nonumber \]
Note that the matrix \(\text{M}_1\) performs row elimination on the second row using the first row. Two times the first row is added to the second row.
The next step is
\[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)=\text{M}_2\text{M}_1\text{A},\nonumber \]
where
\[\text{M}_2\text{M}_1\text{A}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\1&0&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right).\nonumber \]
Note that the matrix \(\text{M}_2\) performs row elimination on the third row using the first row. One times the first row is added to the third row.
The last step is
\[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right)=\text{M}_3\text{M}_2\text{M}_1\text{A},\nonumber \]
where
\[\text{M}_3\text{M}_2\text{M}_1\text{A}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&-1&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right).\nonumber \]
Here, \(\text{M}_3\) performs row elimination on the third row using the second row. Minus one times the second row is added to the third row. We now have
\[\text{M}_3\text{M}_2\text{M}_1\text{A}=\text{U}\nonumber \]
or
\[\text{A}=\text{M}_1^{-1}\text{M}_2^{-1}\text{M}_3^{-1}\text{U}.\nonumber \]
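As a check on these elimination steps, the product \(\text{M}_3\text{M}_2\text{M}_1\text{A}\) can be computed numerically. The following short Python/NumPy sketch (an illustration added here, not part of the original text) builds the three elementary matrices and confirms that the product is the upper triangular matrix \(\text{U}\):

```python
import numpy as np

# Elementary matrices M1, M2, M3 from the elimination steps above
A  = np.array([[-3.,  2., -1.],
               [ 6., -6.,  7.],
               [ 3., -4.,  4.]])
M1 = np.array([[1., 0., 0.], [2., 1., 0.], [0., 0., 1.]])
M2 = np.array([[1., 0., 0.], [0., 1., 0.], [1., 0., 1.]])
M3 = np.array([[1., 0., 0.], [0., 1., 0.], [0., -1., 1.]])

U = M3 @ M2 @ M1 @ A   # apply the row operations in order
print(U)               # [[-3.  2. -1.], [ 0. -2.  5.], [ 0.  0. -2.]]
```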
The inverse matrices are easy to find. The matrix \(\text{M}_1\) multiplies the first row by \(2\) and adds it to the second row. To invert this operation, we simply need to multiply the first row by \(−2\) and add it to the second row, so that
\[\text{M}_1=\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right),\quad\text{M}_1^{-1}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right).\nonumber \]
To check that
\[\text{M}_1\text{M}_1^{-1}=\text{I},\nonumber \]
we multiply
\[\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&0&1\end{array}\right).\nonumber \]
Similarly,
\[\text{M}_2=\left(\begin{array}{rrr}1&0&0\\0&1&0\\1&0&1\end{array}\right),\quad\text{M}_2^{-1}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\-1&0&1\end{array}\right),\nonumber \]
and
\[\text{M}_3=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&-1&1\end{array}\right),\quad\text{M}_3^{-1}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&1&1\end{array}\right).\nonumber \]
Therefore,
\[\text{L}=\text{M}_1^{-1}\text{M}_2^{-1}\text{M}_3^{-1}\nonumber \]
is given by
\[\text{L}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\0&1&0\\-1&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&1&1\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right),\nonumber \]
which is lower triangular. Notice that the off-diagonal elements of \(\text{M}_1^{−1}\), \(\text{M}_2^{−1}\), and \(\text{M}_3^{-1}\) are simply combined to form \(\text{L}\). Without actually multiplying matrices, one could obtain this result by considering how an elementary matrix performs row reduction on another elementary matrix. Our \(\text{LU}\) decomposition is therefore
\[\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right).\nonumber \]
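The same factorization can be reproduced by a short general-purpose routine. Below is a minimal Python/NumPy sketch of Gaussian elimination without pivoting (the function name lu_nopivot is my own, and the code assumes every pivot is nonzero):

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization by Gaussian elimination without pivoting.
    A minimal sketch: assumes all pivots are nonzero."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):                 # eliminate below the pivot U[j, j]
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]    # multiplier, stored as an entry of L
            U[i, :] -= L[i, j] * U[j, :]   # row elimination
    return L, U

A = np.array([[-3., 2., -1.],
              [ 6., -6., 7.],
              [ 3., -4., 4.]])
L, U = lu_nopivot(A)
print(L)   # [[ 1,  0, 0], [-2, 1, 0], [-1, 1, 1]]
print(U)   # [[-3,  2, -1], [ 0, -2, 5], [ 0, 0, -2]]
```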
Another nice feature of the \(\text{LU}\) decomposition is that it can be done by overwriting \(\text{A}\), therefore saving memory if the matrix \(\text{A}\) is very large.
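One way to realize this memory saving, sketched below under the assumption of no pivoting, is to store each multiplier in the entry of \(\text{A}\) it has just zeroed out, so that after the elimination the strictly lower triangle of the array holds \(\text{L}\) (its unit diagonal is implicit) and the upper triangle holds \(\text{U}\):

```python
import numpy as np

def lu_inplace(A):
    """Overwrite A with its LU factors: the strictly lower triangle holds
    the multipliers of L, the upper triangle holds U. No pivoting."""
    n = A.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            A[i, j] /= A[j, j]                       # multiplier, kept in place of the zero
            A[i, j + 1:] -= A[i, j] * A[j, j + 1:]   # update the rest of row i
    return A

A = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])
lu_inplace(A)
print(A)   # [[-3, 2, -1], [-2, -2, 5], [-1, 1, -2]]: L below the diagonal, U on and above
```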
The \(\text{LU}\) decomposition is useful when one needs to solve \(\text{Ax} = \text{b}\) for \(\text{x}\) when \(\text{A}\) is fixed and there are many different \(\text{b}\)’s. First one determines \(\text{L}\) and \(\text{U}\) using Gaussian elimination. Then one writes
\[(\text{LU})\text{x}=\text{L}(\text{Ux})=\text{b}.\nonumber \]
We let
\[\text{y}=\text{Ux},\nonumber \]
and first solve
\[\text{Ly}=\text{b}\nonumber \]
for \(\text{y}\) by forward substitution. We then solve
\[\text{Ux}=\text{y}\nonumber \]
for \(\text{x}\) by back substitution. If we count operations, we can show that, once \(\text{L}\) and \(\text{U}\) are in hand, solving \((\text{LU})\text{x} = \text{b}\) is a factor of \(n\) faster than solving \(\text{Ax} = \text{b}\) directly by Gaussian elimination.
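A minimal sketch of the two triangular solves in Python/NumPy (the names forward_sub and back_sub are my own) shows why they are cheap: each is a single loop costing on the order of \(n^2\) operations, compared with on the order of \(n^3\) for the elimination itself.

```python
import numpy as np

def forward_sub(L, b):
    """Solve Ly = b for y, where L is lower triangular with nonzero diagonal."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]   # use already-computed y's
    return y

def back_sub(U, y):
    """Solve Ux = y for x, where U is upper triangular with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]   # use already-computed x's
    return x
```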
We now illustrate the solution of \(\text{LUx} = \text{b}\) using our previous example, where
\[\text{L}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right),\quad\text{U}=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right),\quad\text{b}=\left(\begin{array}{r}-1\\-7\\-6\end{array}\right).\nonumber \]
With \(\text{y} = \text{Ux}\), we first solve \(\text{Ly} = \text{b}\), that is
\[\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right)\left(\begin{array}{c}y_1\\y_2\\y_3\end{array}\right)=\left(\begin{array}{c}-1\\-7\\-6\end{array}\right).\nonumber \]
Using forward substitution,
\[\begin{aligned}y_1&=-1, \\ y_2&=-7+2y_1=-9, \\ y_3&=-6+y_1-y_2=2.\end{aligned} \nonumber \]
We now solve \(\text{Ux} = \text{y}\), that is
\[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{r}-1\\-9\\2\end{array}\right).\nonumber \]
Using back substitution,
\[\begin{aligned}x_3&=-\frac{1}{2}(2)=-1, \\ x_2&=-\frac{1}{2}(-9-5x_3)=2, \\ x_1&=-\frac{1}{3}(-1-2x_2+x_3)=2,\end{aligned} \nonumber \]
and we have once again determined
\[\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{r}2\\2\\-1\end{array}\right).\nonumber \]
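As a numerical check of this worked example, the two triangular solves can also be done with SciPy's solve_triangular routine (a library sketch, not part of the original text):

```python
import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[ 1., 0., 0.], [-2., 1., 0.], [-1., 1., 1.]])
U = np.array([[-3., 2., -1.], [ 0., -2., 5.], [ 0., 0., -2.]])
b = np.array([-1., -7., -6.])

y = solve_triangular(L, b, lower=True)    # forward substitution: y = (-1, -9, 2)
x = solve_triangular(U, y, lower=False)   # back substitution:    x = ( 2,  2, -1)
print(y, x)
```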
When performing Gaussian elimination, recall that the diagonal element used during the elimination procedure is called the pivot. To obtain the correct multiple, one divides the elements below the pivot by the pivot. Gaussian elimination in this form will fail if the pivot is zero. In this case, a row interchange must be performed.
Even if the pivot is not identically zero, a small value can result in an unstable numerical computation. For large matrices solved by a computer, one can easily lose all accuracy in the solution. To avoid these round-off errors arising from small pivots, row interchanges are made, and the numerical technique is called partial pivoting. This method of \(\text{LU}\) decomposition with partial pivoting is the one usually taught in a standard numerical analysis course.
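In practice one rarely codes the pivoted factorization by hand; library routines such as SciPy's lu_factor and lu_solve, which wrap LAPACK, implement \(\text{LU}\) decomposition with partial pivoting. A minimal sketch applying them to the example of this section:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])
b = np.array([-1., -7., -6.])

lu, piv = lu_factor(A)      # LU with partial pivoting; factors stored compactly in one array
x = lu_solve((lu, piv), b)  # forward and back substitution using the stored factors
print(x)                    # [ 2.  2. -1.]
```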