2.5: LU Decomposition


    View LU Decomposition on YouTube

    View Solving LUx=b on YouTube

    The process of Gaussian elimination also results in the factoring of the matrix \(\text{A}\) to

    \[\text{A}=\text{LU},\nonumber \]

    where \(\text{L}\) is a lower triangular matrix and \(\text{U}\) is an upper triangular matrix. Using the same matrix \(\text{A}\) as in the last section, we show how this factorization is realized. We have

    \[\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)=\text{M}_1\text{A},\nonumber \]

    where

    \[\text{M}_1\text{A}=\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right).\nonumber \]

    Note that the matrix \(\text{M}_1\) performs row elimination on the second row using the first row. Two times the first row is added to the second row.
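
    A quick numerical check (a minimal NumPy sketch; the variable names are only illustrative) builds \(\text{M}_1\) explicitly and confirms that it zeros out the entry below the first pivot:

    ```python
    import numpy as np

    A = np.array([[-3.,  2., -1.],
                  [ 6., -6.,  7.],
                  [ 3., -4.,  4.]])

    # M1 adds 2 times the first row to the second row
    # (the multiplier is -a21/a11 = -6/(-3) = 2)
    M1 = np.array([[1., 0., 0.],
                   [2., 1., 0.],
                   [0., 0., 1.]])

    print(M1 @ A)
    # [[-3.  2. -1.]
    #  [ 0. -2.  5.]
    #  [ 3. -4.  4.]]
    ```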

    The next step is

    \[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)=\text{M}_2\text{M}_1\text{A},\nonumber \]

    where

    \[\text{M}_2\text{M}_1\text{A}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\1&0&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right).\nonumber \]

    Note that the matrix \(\text{M}_2\) performs row elimination on the third row using the first row. One times the first row is added to the third row.

    The last step is

    \[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)\to\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right)=\text{M}_3\text{M}_2\text{M}_1\text{A},\nonumber \]

    where

    \[\text{M}_3\text{M}_2\text{M}_1\text{A}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&-1&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&-2&3\end{array}\right)=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right).\nonumber \]

    Here, \(\text{M}_3\) performs row elimination on the third row using the second row. Minus one times the second row is added to the third row. We now have

    \[\text{M}_3\text{M}_2\text{M}_1\text{A}=\text{U}\nonumber \]

    or

    \[\text{A}=\text{M}_1^{-1}\text{M}_2^{-1}\text{M}_3^{-1}\text{U}.\nonumber \]
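
    The whole elimination can be verified numerically in the same way (a sketch continuing the snippet above; \(\text{M}_2\) and \(\text{M}_3\) are built explicitly and the products compared):

    ```python
    import numpy as np

    A  = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])
    M1 = np.array([[1., 0., 0.], [2., 1., 0.], [0.,  0., 1.]])
    M2 = np.array([[1., 0., 0.], [0., 1., 0.], [1.,  0., 1.]])
    M3 = np.array([[1., 0., 0.], [0., 1., 0.], [0., -1., 1.]])

    U = M3 @ M2 @ M1 @ A
    print(U)
    # [[-3.  2. -1.]
    #  [ 0. -2.  5.]
    #  [ 0.  0. -2.]]

    # Undoing the eliminations in reverse order recovers A
    A_rebuilt = np.linalg.inv(M1) @ np.linalg.inv(M2) @ np.linalg.inv(M3) @ U
    print(np.allclose(A_rebuilt, A))  # True
    ```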

    The inverse matrices are easy to find. The matrix \(\text{M}_1\) multiplies the first row by \(2\) and adds it to the second row. To invert this operation, we simply need to multiply the first row by \(−2\) and add it to the second row, so that

    \[\text{M}_1=\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right),\quad\text{M}_1^{-1}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right).\nonumber \]

    To check that

    \[\text{M}_1\text{M}_1^{-1}=\text{I},\nonumber \]

    we multiply

    \[\left(\begin{array}{ccc}1&0&0\\2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&0&1\end{array}\right).\nonumber \]

    Similarly,

    \[\text{M}_2=\left(\begin{array}{rrr}1&0&0\\0&1&0\\1&0&1\end{array}\right),\quad\text{M}_2^{-1}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\-1&0&1\end{array}\right),\nonumber \]

    and

    \[\text{M}_3=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&-1&1\end{array}\right),\quad\text{M}_3^{-1}=\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&1&1\end{array}\right).\nonumber \]

    Therefore,

    \[\text{L}=\text{M}_1^{-1}\text{M}_2^{-1}\text{M}_3^{-1}\nonumber \]

    is given by

    \[\text{L}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\0&1&0\\-1&0&1\end{array}\right)\left(\begin{array}{rrr}1&0&0\\0&1&0\\0&1&1\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right),\nonumber \]

    which is lower triangular. Notice that the off-diagonal elements of \(\text{M}_1^{−1}\), \(\text{M}_2^{−1}\), and \(\text{M}_3^{-1}\) are simply combined to form \(\text{L}\). Without actually multiplying matrices, one could obtain this result by considering how an elementary matrix performs row reduction on another elementary matrix. Our \(\text{LU}\) decomposition is therefore

    \[\left(\begin{array}{rrr}-3&2&-1\\6&-6&7\\3&-4&4\end{array}\right)=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right)\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right).\nonumber \]
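
    In code, the same factorization is obtained by recording the elimination multipliers directly in \(\text{L}\) rather than forming the \(\text{M}\) matrices. The following is a minimal sketch (the function name is illustrative) that assumes no zero pivots are encountered:

    ```python
    import numpy as np

    def lu_no_pivoting(A):
        """Gaussian elimination without row interchanges.
        Assumes every pivot encountered is nonzero."""
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        for j in range(n - 1):                   # eliminate below the pivot U[j, j]
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]      # multiplier (opposite sign to the M entry)
                U[i, :] -= L[i, j] * U[j, :]
        return L, U

    A = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])
    L, U = lu_no_pivoting(A)
    print(L)                      # [[ 1.  0.  0.] [-2.  1.  0.] [-1.  1.  1.]]
    print(U)                      # [[-3.  2. -1.] [ 0. -2.  5.] [ 0.  0. -2.]]
    print(np.allclose(L @ U, A))  # True
    ```

    Because each multiplier \(L_{ij}\) is computed at exactly the step where \(U_{ij}\) becomes zero, the multipliers can instead be stored in the freed-up strictly lower triangle of the same array; this is the memory-saving overwriting mentioned below.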

    Another nice feature of the \(\text{LU}\) decomposition is that it can be done by overwriting \(\text{A}\), therefore saving memory if the matrix \(\text{A}\) is very large.

    The \(\text{LU}\) decomposition is useful when one needs to solve \(\text{Ax} = \text{b}\) for \(\text{x}\) when \(\text{A}\) is fixed and there are many different \(\text{b}\)’s. First one determines \(\text{L}\) and \(\text{U}\) using Gaussian elimination. Then one writes

    \[(\text{LU})\text{x}=\text{L}(\text{Ux})=\text{b}.\nonumber \]

    We let

    \[\text{y}=\text{Ux},\nonumber \]

    and first solve

    \[\text{Ly}=\text{b}\nonumber \]

    for \(\text{y}\) by forward substitution. We then solve

    \[\text{Ux}=\text{y}\nonumber \]

    for \(\text{x}\) by back substitution. If we count operations, we can show that solving \((\text{LU})\text{x} = \text{b}\) is a factor of \(n\) faster once \(\text{L}\) and \(\text{U}\) are in hand than solving \(\text{Ax} = \text{b}\) directly by Gaussian elimination.
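
    In practice one rarely codes this by hand; a typical pattern (a sketch using SciPy's lu_factor and lu_solve, which also apply partial pivoting) factors \(\text{A}\) once and then reuses the factors for each new right-hand side:

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])

    lu, piv = lu_factor(A)          # O(n^3) work, done once

    # each additional right-hand side costs only O(n^2)
    for b in (np.array([-1., -7., -6.]), np.array([1., 0., 0.])):
        x = lu_solve((lu, piv), b)
        print(x)                    # the first solve gives [ 2.  2. -1.]
    ```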

    We now illustrate the solution of \(\text{LUx} = \text{b}\) using our previous example, where

    \[\text{L}=\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right),\quad\text{U}=\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right),\quad\text{b}=\left(\begin{array}{r}-1\\-7\\-6\end{array}\right).\nonumber \]

    With \(\text{y} = \text{Ux}\), we first solve \(\text{Ly} = \text{b}\), that is

    \[\left(\begin{array}{rrr}1&0&0\\-2&1&0\\-1&1&1\end{array}\right)\left(\begin{array}{c}y_1\\y_2\\y_3\end{array}\right)=\left(\begin{array}{c}-1\\-7\\-6\end{array}\right).\nonumber \]

    Using forward substitution

    \[\begin{aligned}y_1&=-1, \\ y_2&=-7+2y_1=-9, \\ y_3&=-6+y_1-y_2=2.\end{aligned} \nonumber \]

    We now solve \(\text{Ux} = \text{y}\), that is

    \[\left(\begin{array}{rrr}-3&2&-1\\0&-2&5\\0&0&-2\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{r}-1\\-9\\2\end{array}\right).\nonumber \]

    Using back substitution,

    \[\begin{aligned}x_3&=-\frac{1}{2}(2)=-1, \\ x_2&=-\frac{1}{2}(-9-5x_3)=2, \\ x_1&=-\frac{1}{3}(-1-2x_2+x_3)=2,\end{aligned} \nonumber \]

    and we have once again determined

    \[\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{r}2\\2\\-1\end{array}\right).\nonumber \]
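
    The two triangular solves are easy to code directly. Below is a minimal sketch (the function names are illustrative) that reproduces the forward and back substitution above:

    ```python
    import numpy as np

    def forward_substitution(L, b):
        """Solve Ly = b for lower triangular L."""
        n = len(b)
        y = np.zeros(n)
        for i in range(n):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        return y

    def back_substitution(U, y):
        """Solve Ux = y for upper triangular U."""
        n = len(y)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    L = np.array([[ 1., 0.,  0.], [-2.,  1., 0.], [-1., 1.,  1.]])
    U = np.array([[-3., 2., -1.], [ 0., -2., 5.], [ 0., 0., -2.]])
    b = np.array([-1., -7., -6.])

    y = forward_substitution(L, b)   # [-1. -9.  2.]
    x = back_substitution(U, y)      # [ 2.  2. -1.]
    print(y, x)
    ```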

    When performing Gaussian elimination, recall that the diagonal element that one uses during the elimination procedure is called the pivot. To obtain the correct multiple, one uses the pivot as the divisor to the elements below the pivot. Gaussian elimination in this form will fail if the pivot is zero. In this case, a row interchange must be performed.

    Even if the pivot is not identically zero, a small value can result in an unstable numerical computation. For large matrices solved by a computer, one can easily lose all accuracy in the solution. To avoid these round-off errors arising from small pivots, row interchanges are made, and the numerical technique is called partial pivoting. This method of \(\text{LU}\) decomposition with partial pivoting is the one usually taught in a standard numerical analysis course.
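
    To see pivoting in action, consider a matrix whose leading pivot is zero: elimination without row interchanges fails immediately, but a library routine with partial pivoting handles it. A sketch using SciPy's lu, which returns \(\text{P}\), \(\text{L}\), \(\text{U}\) with \(\text{A} = \text{PLU}\):

    ```python
    import numpy as np
    from scipy.linalg import lu

    A = np.array([[0., 2.],
                  [1., 3.]])   # zero pivot in the (1,1) position

    P, L, U = lu(A)            # partial pivoting swaps the two rows
    print(P)
    print(L)
    print(U)
    print(np.allclose(P @ L @ U, A))  # True
    ```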


    FAQs

    Is LU decomposition always possible?

    No. It is sometimes impossible to write a matrix in the form "lower triangular" × "upper triangular"; in that case the rows must first be permuted (see the pivoting questions below).

    How do you calculate LU decomposition?

    LU decomposition is computed by Gaussian elimination: row operations reduce the square matrix A to an upper triangular matrix U, while the multipliers used at each elimination step are recorded and assembled into the lower triangular matrix L.

    What is the complexity of LU decomposition?

    The complexity of computing the LU decomposition of a dense \(N\times N\) matrix is \(O(N^3)\): the number of floating-point operations required grows cubically with \(N\).

    Why is LU decomposition faster?

    LU decomposition stands for lower–upper triangular decomposition. Once A = LU is available, each new right-hand side is handled by a cheap forward substitution with L followed by a back substitution with U, which is much faster than repeating the full elimination.

    What is the sufficient condition for the LU decomposition method?

    A sufficient condition is that every leading principal minor of A is nonzero. In that case there is a lower triangular matrix L with all diagonal entries equal to 1 and an upper triangular matrix U such that A = LU.

    Under what condition does LU decomposition not exist?

    Gaussian elimination without row interchanges breaks down when a pivot (for example, the top-left entry of A) is zero, and it becomes inaccurate when a pivot is very small compared with the other entries. Pivoting mitigates this problem by rearranging the rows and/or columns of A to move a larger element into the pivot position; there are many different pivoting strategies.

    What is the flop count for LU decomposition?

    The number of floating-point operations required to factor an \(n\times n\) matrix is \(f\approx\frac{2}{3}n^3\); the exact count is \(f=\frac{1}{6}n(4n+1)(n-1)\). The ratio of memory references to flops is \(r = 3/n \to 0\) as \(n\to\infty\).
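
    Expanding the exact count confirms the leading cubic term:

    \[\frac{1}{6}n(4n+1)(n-1)=\frac{1}{6}\left(4n^3-3n^2-n\right)=\frac{2}{3}n^3-\frac{1}{2}n^2-\frac{1}{6}n\approx\frac{2}{3}n^3\quad\text{for large }n.\nonumber \]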

    Is Cholesky decomposition faster than LU decomposition?

    Cholesky decomposition applies only to Hermitian (for real matrices, symmetric) positive definite matrices. Where it applies, it exploits the symmetry and requires roughly half the work, so it is approximately 2× faster than LU decomposition.


    What are the limitations of LU decomposition?

    There are some limitations and restrictions associated with LU decomposition:

    • Square matrices: LU decomposition as described here applies only to square matrices (matrices with an equal number of rows and columns).

    • Existence: not all square matrices have an LU decomposition (without row interchanges).

    Is LU decomposition numerically stable?

    LU factorization without pivoting is not stable in general, but with partial pivoting every square matrix can be factorized as PA = LU, and the factorization is numerically stable in practice. This makes LU decomposition with partial pivoting (LUP decomposition) a useful technique.

    How is the determinant computed from an LU decomposition?

    One application of LU decomposition is computing determinants. Since \(\det\text{A}=\det\text{L}\,\det\text{U}\), and the determinant of a triangular matrix is the product of its diagonal entries, \(\det\text{A}\) is the product of the diagonal elements of L and U (times \(-1\) for each row interchange if pivoting was used). Since LU decomposition is quite efficient, this is a computationally efficient way of computing the determinant.
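
    As a sketch using SciPy's lu_factor (the sign bookkeeping for the row interchanges is the only subtlety; the assumption here is that counting the entries of piv that differ from their index gives the number of swaps):

    ```python
    import numpy as np
    from scipy.linalg import lu_factor

    A = np.array([[-3., 2., -1.], [6., -6., 7.], [3., -4., 4.]])

    lu, piv = lu_factor(A)
    # L has a unit diagonal, so det(A) is the product of U's diagonal entries,
    # up to the sign (-1)^(number of row interchanges).
    sign = (-1.0) ** np.count_nonzero(piv != np.arange(len(piv)))
    det = sign * np.prod(np.diag(lu))
    print(det, np.linalg.det(A))   # the two values should agree
    ```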

    How do you check whether an LU decomposition exists?

    A square matrix has an LU decomposition (or LU factorization) if it can be written as the product of a lower triangular matrix and an upper triangular matrix. For an invertible matrix, this is the case exactly when all of its leading principal minors are nonzero.

    When can you not do LU factorization?

    An invertible matrix has no LU factorization (without row interchanges) precisely when at least one of its leading principal minors is zero; in that case the rows must be permuted first, giving PA = LU. For singular matrices the existence question is more delicate.

    What are the disadvantages of LU decomposition?

    LU decomposition can be numerically unstable for ill-conditioned matrices (matrices with a large condition number): small errors in the input data can then lead to large errors in the output.

    What is the Doolittle algorithm for LU decomposition?

    The Doolittle algorithm is a standard way of performing LU decomposition in which a given matrix is factored into a lower triangular matrix L with ones on its diagonal and an upper triangular matrix U.
