7: LU Decomposition Method for Solving Simultaneous Linear Equations


    Learning Objectives

    After successful completion of this section, you should be able to

1. solve a set of simultaneous linear equations using the LU decomposition method,
2. decompose a nonsingular matrix into LU form,
3. find the inverse of a matrix using the LU decomposition method, and
4. justify why using the LU decomposition method is more efficient than Gaussian elimination in some cases.

    I hear about LU decomposition used as a method to solve a set of simultaneous linear equations. What is it?

    We already studied two numerical methods of finding the solution to simultaneous linear equations – Naïve Gauss elimination and Gaussian elimination with partial pivoting. Then, why do we need to learn another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us discuss first what LU decomposition is about.

    For a nonsingular matrix \(\left\lbrack A \right\rbrack\) on which one can successfully conduct the Naïve Gauss elimination forward elimination steps, one can always write it as

    \[\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \nonumber \]

    where

    \[\left\lbrack L \right\rbrack = \text{Lower triangular matrix} \nonumber \]

    \[\left\lbrack U \right\rbrack = \text{Upper triangular matrix} \nonumber \]

    Then if one is solving a set of equations

    \[\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack, \nonumber \]

    then

    \[\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\ \text{as }\left( \lbrack A\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \right) \nonumber \]

    Multiplying both sides by \(\left\lbrack L \right\rbrack^{- 1}\),

    \[\left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack \nonumber \]

    \[\left\lbrack I \right\rbrack \left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack = \lbrack I\rbrack \right) \nonumber \]

    \[\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack I \right\rbrack\ \left\lbrack U \right\rbrack = \lbrack U\rbrack \right) \nonumber \]

    Let

    \[\left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack = \left\lbrack Z \right\rbrack \nonumber \]

    then

    \[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\ \ \ (1) \nonumber \]

    and

    \[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack\ \ \ (2) \nonumber \]

    So we can solve Equation (1) first for \(\lbrack Z\rbrack\) by using forward substitution and then use Equation (2) to calculate the solution vector \(\left\lbrack X \right\rbrack\) by back substitution.
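Both triangular solves take only a few lines of code. Below is a minimal Python/NumPy sketch of the two substitution steps (our own illustration, not part of the original text; the helper names forward_substitution and back_substitution are hypothetical).

    import numpy as np

    def forward_substitution(L, c):
        # Solve [L][Z] = [C] (Equation 1), working from the first row down.
        n = len(c)
        z = np.zeros(n)
        for i in range(n):
            z[i] = (c[i] - L[i, :i] @ z[:i]) / L[i, i]
        return z

    def back_substitution(U, z):
        # Solve [U][X] = [Z] (Equation 2), working from the last row up.
        n = len(z)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x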

How do I decompose a nonsingular matrix [A], that is, how do I find [A] = [L][U]?

    If forward elimination steps of the Naïve Gauss elimination methods can be applied on a nonsingular matrix, then \(\left\lbrack A \right\rbrack\) can be decomposed into LU as

    \[\begin{split} \lbrack A\rbrack &= \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 & \ldots & 0 \\ {l}_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \cdots & \vdots \\ {l}_{n1} & {l}_{n2} & \cdots & 1 \\ \end{bmatrix}\ \ \begin{bmatrix} u_{11} & u_{12} & \ldots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & \cdots & u_{nn} \\ \end{bmatrix} \end{split} \nonumber \]

    The elements of the \(\left\lbrack U \right\rbrack\) matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination steps in Naïve Gauss elimination.

The lower triangular matrix \(\left\lbrack L \right\rbrack\) has \(1\) in each of its diagonal entries. Its nonzero off-diagonal elements are the multipliers that were used during forward elimination to make the corresponding entries below the diagonal zero.
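The decomposition itself is simply forward elimination with the multipliers saved as they are computed. Here is a short Python sketch (again our own illustration, with the hypothetical name lu_decompose, reusing the numpy import above); it assumes, as the text requires, that Naïve Gauss elimination succeeds, i.e., no zero pivot is encountered.

    def lu_decompose(A):
        # Doolittle LU via Naive Gauss forward elimination (no pivoting).
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.eye(n)
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]               # multiplier for row i at step k
                U[i, k:] = U[i, k:] - L[i, k] * U[k, k:]  # makes U[i, k] zero
        return L, U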

    Let us look at this using the same example as used in Naïve Gaussian elimination.

    Example \(\PageIndex{1}\)

    Find the LU decomposition of the matrix

    \[\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber \]

    Solution

    \[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ {l}_{21} & 1 & 0 \\ {l}_{31} & {l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \end{split} \nonumber \]

The \(\left\lbrack U \right\rbrack\) matrix is the same as that found at the end of the forward elimination steps of the Naïve Gauss elimination method, that is

    \[\left\lbrack U \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \nonumber \]

    To find \({l}_{21}\) and \({l}_{31}\), find the multiplier that was used to make the \(a_{21}\) and \(a_{31}\) elements zero in the first step of forward elimination of the Naïve Gauss elimination method. It was

    \[\begin{split} {l}_{21} &= \frac{64}{25}\\ &= 2.56 \end{split} \nonumber \]

    \[\begin{split} {l}_{31} &= \frac{144}{25}\\ &= 5.76 \end{split} \nonumber \]

    To find \({l}_{32}\), what multiplier was used to make \(a_{32}\) element zero? Remember \(a_{32}\) element was made zero in the second step of forward elimination. The \(\left\lbrack A \right\rbrack\) matrix at the beginning of the second step of forward elimination was

    \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & - 16.8 & - 4.76 \\ \end{bmatrix} \nonumber \]

    So

    \[\begin{split} {l}_{32} &= \frac{- 16.8}{- 4.8}\\ &= 3.5 \end{split} \nonumber \]

    Hence

    \[\left\lbrack L \right\rbrack = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix} \nonumber \]

    Confirm \(\left\lbrack L \right\rbrack \left\lbrack U \right\rbrack = \left\lbrack A \right\rbrack\).

    \[\begin{split} \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\\ &= \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \end{split} \nonumber \]
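As a quick numerical check, the lu_decompose sketch given earlier reproduces this result:

    A = np.array([[25.0, 5, 1], [64, 8, 1], [144, 12, 1]])
    L, U = lu_decompose(A)
    # L -> [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]
    # U -> [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
    print(np.allclose(L @ U, A))   # True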

    Example \(\PageIndex{2}\)

    Use the LU decomposition method to solve the following simultaneous linear equations.

    \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber \]

    Solution

    Recall that

    \[\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack \nonumber \]

    and if

    \[\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \nonumber \]

    then first solving

    \[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack \nonumber \]

    and then

    \[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber \]

    gives the solution vector \(\left\lbrack X \right\rbrack\).

    Now in the previous example, we showed

    \[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &=\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber \]

    First solve

    \[\left\lbrack L \right\rbrack\ \left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack \nonumber \]

    \[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber \]

    to give

    \[z_{1} = 106.8 \nonumber \]

    \[2.56z_{1} + z_{2} = 177.2 \nonumber \]

    \[5.76z_{1} + 3.5z_{2} + z_{3} = 279.2 \nonumber \]

    Forward substitution starting from the first equation gives

    \[z_{1} = 106.8 \nonumber \]

    \[\begin{split} z_{2} &= 177.2 - 2.56z_{1}\\ &= 177.2 - 2.56 \times 106.8\\ &= - 96.208 \end{split} \nonumber \]

    \[\begin{split} z_{3} &= 279.2 - 5.76z_{1} - 3.5z_{2}\\ &= 279.2 - 5.76 \times 106.8 - 3.5 \times \left( - 96.208 \right)\\ &= 0.76 \end{split} \nonumber \]

    Hence

    \[\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \end{split} \nonumber \]

This vector is the same as the right-hand side vector obtained at the end of the forward elimination steps of the Naïve Gauss elimination method. Is this a coincidence?

    Now solve

    \[\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber \]

    \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \nonumber \]

    \[25a_{1} + 5a_{2} + a_{3} = 106.8 \nonumber \]

    \[- 4.8a_{2} - 1.56a_{3} = - 96.208 \nonumber \]

    \[0.7a_{3} = 0.76 \nonumber \]

    From the third equation

    \[0.7a_{3} = 0.76 \nonumber \]

    \[\begin{split} a_{3} &= \frac{0.76}{0.7}\\ &= 1.0857\end{split} \nonumber \]

    Substituting the value of \(a_{3}\) in the second equation,

    \[- 4.8a_{2} - 1.56a_{3} = - 96.208 \nonumber \]

    \[\begin{split} a_{2} &= \frac{- 96.208 + 1.56a_{3}}{- 4.8}\\ &= \frac{- 96.208 + 1.56 \times 1.0857}{- 4.8}\\ &= 19.691 \end{split} \nonumber \]

    Substituting the value of \(a_{2}\) and \(a_{3}\) in the first equation,

    \[25a_{1} + 5a_{2} + a_{3} = 106.8 \nonumber \]

    \[\begin{split} a_{1} &= \frac{106.8 - 5a_{2} - a_{3}}{25}\\ &= \frac{106.8 - 5 \times 19.691 - 1.0857}{25}\\ &= 0.29048 \end{split} \nonumber \]

    Hence the solution vector is

    \[\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.691 \\ 1.0857 \\ \end{bmatrix} \nonumber \]
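With the substitution helpers sketched earlier, the whole example takes a few lines (hypothetical helper names as before):

    c = np.array([106.8, 177.2, 279.2])
    z = forward_substitution(L, c)  # [106.8, -96.208, 0.76]
    a = back_substitution(U, z)     # [0.29048, 19.691, 1.0857] to five significant digits
    print(a)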

    How do I find the inverse of a square matrix using LU decomposition?

    A matrix \(\left\lbrack B \right\rbrack\) is the inverse of \(\left\lbrack A \right\rbrack\) if

    \[\left\lbrack A \right\rbrack\left\lbrack B \right\rbrack = \left\lbrack I \right\rbrack = \left\lbrack B \right\rbrack\left\lbrack A \right\rbrack \nonumber \]

    How can we use LU decomposition to find the inverse of the matrix? Assume the first column of \(\left\lbrack B \right\rbrack\) (the inverse of \(\left\lbrack A \right\rbrack\)) is

\[\lbrack b_{11}\ b_{21}\ \ldots\ b_{n1}\rbrack^{T} \nonumber \]

    Then from the above definition of an inverse and the definition of matrix multiplication

    \[\left\lbrack A \right\rbrack\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber \]

    Similarly, the second column of \(\left\lbrack B \right\rbrack\) is given by

    \[\left\lbrack A \right\rbrack\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber \]

Similarly, all \(n\) columns of \(\left\lbrack B \right\rbrack\) can be found by solving \(n\) different sets of equations, with the right-hand side vectors being the \(n\) columns of the identity matrix.
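This idea translates directly to code: decompose once, then loop over the columns of the identity matrix. A sketch built on the hypothetical helpers introduced earlier:

    def inverse_via_lu(A):
        # One decomposition, then n forward and n back substitutions.
        n = A.shape[0]
        L, U = lu_decompose(A)
        B = np.zeros((n, n))
        I = np.eye(n)
        for j in range(n):
            z = forward_substitution(L, I[:, j])
            B[:, j] = back_substitution(U, z)  # j-th column of the inverse
        return B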

    Example \(\PageIndex{3}\)

    Use LU decomposition to find the inverse of

    \[\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber \]

    Solution

    Knowing that

    \[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber \]

We can solve for the first column of \(\lbrack B\rbrack = \left\lbrack A \right\rbrack^{- 1}\) by solving

    \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber \]

    First solve

    \[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack, \nonumber \]

    that is

    \[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber \]

    to give

    \[z_{1} = 1 \nonumber \]

    \[2.56z_{1} + z_{2} = 0 \nonumber \]

    \[5.76z_{1} + 3.5z_{2} + z_{3} = 0 \nonumber \]

    Forward substitution starting from the first equation gives

    \[z_{1} = 1 \nonumber \]

    \[\begin{split} z_{2} &= 0 - 2.56z_{1}\\ &= 0 - 2.56\left( 1 \right)\\ &= - 2.56 \end{split} \nonumber \]

    \[\begin{split} z_{3} &= 0 - 5.76z_{1} - 3.5z_{2}\\ &= 0 - 5.76\left( 1 \right) - 3.5\left( - 2.56 \right)\\ &= 3.2 \end{split} \nonumber \]

    Hence

    \[\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \end{split} \nonumber \]

    Now solve

    \[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber \]

    that is

    \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \nonumber \]

    \[25b_{11} + 5b_{21} + b_{31} = 1 \nonumber \]

    \[- 4.8b_{21} - 1.56b_{31} = - 2.56 \nonumber \]

    \[0.7b_{31} = 3.2 \nonumber \]

    Backward substitution starting from the third equation gives

    \[\begin{split} b_{31} &= \frac{3.2}{0.7}\\ &= 4.571 \end{split} \nonumber \]

    \[\begin{split} b_{21} &= \frac{- 2.56 + 1.56b_{31}}{- 4.8}\\ &= \frac{- 2.56 + 1.56(4.571)}{- 4.8}\\ &= - 0.9524 \end{split} \nonumber \]

    \[\begin{split} b_{11} &= \frac{1 - 5b_{21} - b_{31}}{25}\\ &= \frac{1 - 5( - 0.9524) - 4.571}{25}\\ &= 0.04762 \end{split} \nonumber \]

    Hence the first column of the inverse of \(\left\lbrack A \right\rbrack\) is

    \[\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 0.04762 \\ - 0.9524 \\ 4.571 \\ \end{bmatrix} \nonumber \]

    Similarly, solving

    \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} - 0.08333 \\ 1.417 \\ - 5.000 \\ \end{bmatrix} \nonumber \]

    and solving

    \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0.03571 \\ - 0.4643 \\ 1.429 \\ \end{bmatrix} \nonumber \]

    Hence

    \[\left\lbrack A \right\rbrack^{- 1} = \begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix} \nonumber \]

    Can you confirm the following for the above example?

    \[\left\lbrack A \right\rbrack\ \left\lbrack A \right\rbrack^{- 1} = \left\lbrack I \right\rbrack = \left\lbrack A \right\rbrack^{- 1}\left\lbrack A \right\rbrack \nonumber \]
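With the inverse_via_lu sketch above, the confirmation is one line each way:

    Ainv = inverse_via_lu(A)
    print(np.allclose(A @ Ainv, np.eye(3)))   # True
    print(np.allclose(Ainv @ A, np.eye(3)))   # True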

    LU decomposition looks more complicated than Gaussian elimination. Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by \(\mathbf{[A][X]=[C]}\)?

    For a square matrix \(\lbrack A\rbrack\) of \(n \times n\) size, the computational time[^1] \(CT|_{DE}\) to decompose the \(\lbrack A\rbrack\) matrix to \(\lbrack L\rbrack\lbrack U\rbrack\) form is given by

    \[CT|_{DE} = T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right), \nonumber \]

where \(T\) = clock cycle time.[^2]

    The computational time \(CT|_{FS}\) to solve by forward substitution \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) is given by

    \[CT|_{FS} = T\left( 4n^{2} - 4n \right) \nonumber \]

    The computational time \(CT|_{BS}\) to solve by back substitution \(\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack\) is given by

    \[CT|_{BS} = T\left( 4n^{2} + 12n \right) \nonumber \]

    So, the total computational time to solve a set of equations by LU decomposition is

    \[\begin{split} {CT|}_{LU} &= {CT|}_{DE} + {CT|}_{FS} + {CT|}_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + T\left( 4n^{2} - 4n \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right) \end{split} \nonumber \]

    Now let us look at the computational time taken by Gaussian elimination. The computational time \(CT|_{FE}\) for the forward elimination part,

    \[{CT|}_{FE} = T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right), \nonumber \]

    and the computational time \(CT|_{BS}\) for the back substitution part is

    \[{CT|}_{BS} = T\left( 4n^{2} + 12n \right) \nonumber \]

    So, the total computational time \(CT|_{GE}\) to solve a set of equations by Gaussian Elimination is

    \[\begin{split} {CT|}_{GE} &= {CT|}_{FE} + {CT|}_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right) \end{split} \nonumber \]

    The computational time for Gaussian elimination and LU decomposition is identical.

    This has confused me further! Why should I learn LU decomposition method when it takes the same computational time as Gaussian elimination, and that too when the two methods are closely related? Please convince me that LU decomposition has its place in solving linear equations!

We now have the knowledge to convince you that the LU decomposition method has its place in the solution of simultaneous linear equations. Let us look at an example where the LU decomposition method is computationally more efficient than Gaussian elimination. Remember that in finding the inverse of a matrix \(\lbrack A\rbrack\) in Chapter 04.01, the problem reduces to solving \(n\) sets of equations with the \(n\) columns of the identity matrix as the right-hand side vectors. While calculating each column of the inverse, the coefficient matrix \(\lbrack A\rbrack\) in the set of equations \(\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\) does not change. So if we use the LU decomposition method, the \(\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition needs to be done only once, while the forward substitution (Equation 1) and the back substitution (Equation 2) each need to be done \(n\) times.

    Therefore, the total computational time \(CT|_{inverse\ {LU}}\) required to find the inverse of a matrix using LU decomposition is

    \[\begin{split} {CT|}_{inverse\ {LU}} &= 1 \times {CT|}_{DE} + n \times {CT|}_{FS} + n \times {CT|}_{BS}\\ &= 1 \times T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + n \times T\left( 4n^{2} - 4n \right) + n \times T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{32n^{3}}{3} + 12n^{2} - \frac{20n}{3} \right) \end{split} \nonumber \]

In comparison, if the Gaussian elimination method were used to find the inverse of a matrix, the forward elimination as well as the back substitution would have to be done \(n\) times. The total computational time \({CT|}_{inverse\ {GE}}\) required to find the inverse of a matrix using Gaussian elimination is then

    \[\begin{split} {CT|}_{inverse\ {GE}} &= n \times {CT|}_{FE} + n \times {CT|}_{BS}\\ &= n \times T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + n \times T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{4}}{3} + 12n^{3} + \frac{4n^{2}}{3} \right) \end{split} \nonumber \]

Clearly, for large \(n\), \({CT|}_{inverse\ {GE}} \gg {CT|}_{inverse\ {LU}}\), as \({CT|}_{inverse\ {GE}}\) has a dominating term of order \(n^{4}\) while \({CT|}_{inverse\ {LU}}\) has a dominating term of order \(n^{3}\). For large values of \(n\), the Gaussian elimination method would take approximately \(n/4\) times more computational time than the LU decomposition method (prove it!). Typical values of the ratio of the computational times for different values of \(n\) are given in Table 1.

Table 1. Comparing computational times of finding the inverse of a matrix using LU decomposition and Gaussian elimination.

\[\begin{array}{l|cccc} n & 10 & 100 & 1000 & 10000 \\ \hline {CT|}_{inverse\ {GE}}/{CT|}_{inverse\ {LU}} & 3.28 & 25.83 & 250.8 & 2501 \\ \end{array} \nonumber \]
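The entries of Table 1 follow directly from the two formulas above; note that the clock cycle time \(T\) cancels in the ratio. A short Python check (our own illustration):

    def ct_inverse_ge(n):
        return 8 * n**4 / 3 + 12 * n**3 + 4 * n**2 / 3

    def ct_inverse_lu(n):
        return 32 * n**3 / 3 + 12 * n**2 - 20 * n / 3

    for n in (10, 100, 1000, 10000):
        print(n, ct_inverse_ge(n) / ct_inverse_lu(n))
    # prints approximately 3.29, 25.84, 250.8, 2501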

    Are you convinced now that LU decomposition has its place in solving systems of equations when we have to solve for multiple right-hand side vectors while the coefficient matrix stays unchanged?

[^1]: The time is calculated by first separately counting the number of additions, subtractions, multiplications, and divisions in a procedure such as back substitution. We then assume 4 clock cycles each for an add, subtract, or multiply operation, and 16 clock cycles for a divide operation, as is the case for a typical AMD®-K7 chip.

[^2]: http://www.isi.edu/~draper/papers/mwscas07_kwon.pdf

    LU Decomposition Method for Solving Simultaneous Linear Equations Quiz

    Quiz 1

    The \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method is computationally more efficient than Naïve Gauss elimination for solving

    (A). a single set of simultaneous linear equations.

    (B). multiple sets of simultaneous linear equations with different coefficient matrices and the same right-hand side vectors.

    (C). multiple sets of simultaneous linear equations with the same coefficient matrix and different right-hand side vectors.

    (D). less than ten simultaneous linear equations.

    Quiz 2

    The lower triangular matrix \(\left\lbrack L \right\rbrack\) in the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition of the matrix given below

\[\begin{bmatrix} 25 & 5 & 4 \\ 10 & 8 & 16 \\ 8 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber \]

is

    (A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)

    (B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)

    (C) \(\begin{bmatrix} 1 & 0 & 0 \\ 10 & 1 & 0 \\ 8 & 12 & 0 \\ \end{bmatrix}\)

    (D) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.5000 & 1 \\ \end{bmatrix}\)

    Quiz 3

    The upper triangular matrix \(\left\lbrack U \right\rbrack\) in the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition of the matrix given below

\[\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber \]

    is

    (A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)

    (B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)

    (C) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 0 & - 2 \\ \end{bmatrix}\)

(D) \(\begin{bmatrix} 1 & 0.2000 & 0.16000 \\ 0 & 1 & 2.4000 \\ 0 & 0 & - 4.240 \\ \end{bmatrix}\)

    Quiz 4

For a given \(2000 \times 2000\) matrix \(\left\lbrack A \right\rbrack\), assume that it takes about 15 seconds to find its inverse by the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method, that is, finding the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition once and then doing forward substitution and back substitution \(2000\) times, using the \(2000\) columns of the identity matrix as the right-hand side vectors. The approximate time, in seconds, to find the inverse by repeated use of the Naïve Gauss elimination method, that is, doing forward elimination and back substitution \(2000\) times using the \(2000\) columns of the identity matrix as the right-hand side vectors, is most nearly

    (A) \(300\)

    (B) \(1500\)

    (C) \(7500\)

    (D) \(30000\)

    Quiz 5

The algorithm for solving a set of \(n\) equations \(\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\), where \(\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\), involves solving \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) by forward substitution. The algorithm to solve \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) is given by

    (A) \(z_{1} = c_{1}/l_{11}\)

    for \(i\) from 2 to \(n\) do

    sum = 0

    for \(j\) from 1 to \(i\) do

    sum = sum + \(l_{\text{ij}}*z_{j}\)

    end do

    \(z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}\)

    end do

    (B) \(z_{1} = c_{1}/l_{11}\)

    for \(i\) from 2 to \(n\) do

    sum = 0

    for \(j\) from 1 to \((i - 1)\) do

    sum = sum + \(l_{\text{ij}}*z_{j}\)

    end do

    \(z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}\)

    end do

    (C) \(z_{1} = c_{1}/l_{11}\)

    for \(i\) from 2 to \(n\) do

    for \(j\) from 1 to \((i - 1)\)do

    sum = sum + \(l_{\text{ij}}*z_{j}\)

    end do

    \(z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}\)

    end do

    (D) for \(i\) from 2 to \(n\) do

    sum = 0

    for \(j\) from 1 to \((i - 1)\) do

sum = sum + \(l_{\text{ij}}*z_{j}\)

    end do

    \(z_{i} = (c_{i} - \text{sum})/l_{\text{ii}}\)

    end do

    Quiz 6

To solve boundary value problems, a numerical method based on the finite difference method is used. This results in simultaneous linear equations with tridiagonal coefficient matrices, which are solved using a specialized \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method.

    Choose the set of equations that approximately solves the boundary value problem

    \[\frac{d^{2}y}{dx^{2}} = 6x - 0.5x^{2},\ \ y\left( 0 \right) = 0,\ \ y\left( 12 \right) = 0,\ \ 0 \leq x \leq 12 \nonumber \]

The second derivative in the above equation is approximated by the second-order accurate central divided difference approximation as learned in the differentiation module (Chapter 02.02). A step size of \(h = 4\) is used, so the value of \(y\) can be found approximately at four equidistantly placed nodes between \(x = 0\) and \(x = 12\).

    (A) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

    (B) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

    (C) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

    (D) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 16.0 \\ 16.0 \\ \end{bmatrix}\)

    LU Decomposition Method for Solving Simultaneous Linear Equations Exercise

    Exercise 1

    Show that LU decomposition is computationally a more efficient way of finding the inverse of a square matrix than using Gaussian elimination.

    Exercise 2

Use LU decomposition to find \(\lbrack L\rbrack\) and \(\lbrack U\rbrack\) for the coefficient matrix of the following set of equations.

    \[4x_{1} + x_{2} - x_{3} = - 2 \nonumber \] \[5x_{1} + x_{2} + 2x_{3} = 4 \nonumber \] \[6x_{1} + x_{2} + x_{3} = 6 \nonumber \]

    Exercise 3

Find the inverse of

    \[\lbrack A\rbrack = \begin{bmatrix} 3 & 4 & 1 \\ 2 & - 7 & - 1 \\ 8 & 1 & 5 \\ \end{bmatrix} \nonumber \]

    using LU decomposition.

    Exercise 4

    Fill in the blanks for the unknowns in the LU decomposition of the matrix given below

\[\begin{bmatrix} 25 & 5 & 4 \\ 75 & 7 & 16 \\ 12.5 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 4 \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber \]

    Exercise 5

    Show that the nonsingular matrix

    \[\left\lbrack A \right\rbrack = \begin{bmatrix} 0 & 2 \\ 2 & 0 \\ \end{bmatrix} \nonumber \]

    cannot be decomposed into LU form.

    Exercise 6

    The LU decomposition of

    \[\lbrack A\rbrack = \begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix} \nonumber \]

    is given by

    \[\begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1.25 & 1 & 0 \\ 1.5 & 2 & 1 \\ \end{bmatrix}\begin{bmatrix} ?? & ?? & ?? \\ 0 & ?? & ?? \\ 0 & 0 & ?? \\ \end{bmatrix} \nonumber \]

Find the upper triangular matrix in the above decomposition.

