In this work, we present a low-memory variant of the Lanczos algorithm for the solution of the Lyapunov equation
\begin{equation}
AX + XA = \mathbf{c}\mathbf{c}^T, \tag{1}
\end{equation}
where $A$ is a large-scale symmetric positive-definite matrix and $\mathbf{c}$ is a vector.
The classical Lanczos method consists of building an orthonormal basis $Q_M$ for the polynomial Krylov subspace
$$\mathcal{K}_M(A,\mathbf{c}) = \operatorname{span}(\mathbf{c}, A\mathbf{c}, \dots, A^{M-1}\mathbf{c})$$
and in approximating the solution $X$ with $Q_MY_MQ_M^T$, where $Y_M$ solves the projected equation
$$ Q_M^TAQ_M Y_M + Y_MQ_M^TAQ_M = (Q_M^T \mathbf{c}) (Q_M^T \mathbf{c})^T.$$
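As a concrete illustration, the classical projection method described above can be sketched in Python. This is our own minimal sketch, not the implementation from the talk: the function name is hypothetical, and the small projected equation is handed to SciPy's dense Lyapunov solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves T Y + Y T^H = C

def lanczos_lyapunov(A, c, M):
    """Classical Lanczos projection for A X + X A = c c^T with A SPD.
    Returns (Q, Y) such that X is approximated by Q Y Q^T."""
    n = len(c)
    Q = np.zeros((n, M))
    alpha = np.zeros(M)
    beta = np.zeros(M - 1)
    Q[:, 0] = c / np.linalg.norm(c)
    for j in range(M):
        w = A @ Q[:, j]                      # one matrix-vector product per step
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization, for robustness in this small sketch
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < M - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # tridiagonal projection T = Q_M^T A Q_M
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    c_proj = Q.T @ c                         # equals ||c|| e_1 in exact arithmetic
    Y = solve_continuous_lyapunov(T, np.outer(c_proj, c_proj))
    return Q, Y
```

Note that the full basis $Q_M$ is kept in memory here, which is exactly the storage bottleneck discussed below.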
The Lanczos algorithm often requires a relatively large $M$ to obtain a good approximation of the solution, which can lead to memory issues due to the storage demands of $Q_M$. At the same time, the solution $X$ can often be well approximated by a low-rank matrix whose rank is significantly smaller than $M$, i.e., the dimension of the polynomial Krylov subspace.
An alternative approach is to use a rational Krylov subspace instead of a polynomial one. Using Zolotarev poles for the rational Krylov subspace, the solution $X$ can be approximated by a low-rank matrix with the guarantee that the residual norm is smaller than a prescribed tolerance [2]. The rank of the computed approximation is usually close to the numerical rank of the exact solution. The main drawback is that this method requires solving multiple shifted linear systems with the matrix $A$, which is prohibitive when $A$ is large.
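A minimal rational Arnoldi sketch (again our own illustration, with a hypothetical function name and dense shifted solves for readability) makes the cost pattern explicit: each new basis vector requires one shifted linear solve with $A$.

```python
import numpy as np

def rational_krylov_basis(A, c, poles):
    """Rational Arnoldi sketch: orthonormal basis of the rational
    Krylov subspace generated by c and the given poles.  Every pole
    costs one shifted linear solve with A -- the step that becomes
    prohibitive when A is large."""
    n = len(c)
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = c / np.linalg.norm(c)
    for j, s in enumerate(poles):
        w = np.linalg.solve(A - s * np.eye(n), V[:, j])   # shifted solve
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)        # Gram-Schmidt
        V[:, j + 1] = w / np.linalg.norm(w)
    return V
```

For an SPD matrix $A$, the Zolotarev poles are negative reals, so each shift $A - sI$ stays positive definite; in a large-scale setting the dense solve above would have to be replaced by a sparse or iterative solver.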
Mimicking the approach in [3], our method employs a polynomial Krylov subspace to approximate the solution of (1) while leveraging rational Krylov subspaces associated with small matrices to compress the Lanczos basis. This method accesses $A$ only through matrix-vector products and requires the storage of only a few vectors from the polynomial Krylov subspace instead of the entire Lanczos basis, producing an approximate solution whose rank is independent of the dimension of the involved polynomial Krylov subspace.
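The actual compression in [1] and [3] relies on rational Krylov subspaces of small matrices; as a generic illustration of the underlying idea only, the following sketch shows how a factorization $Q Y Q^T$ can be truncated to a rank that depends on the spectrum of the small matrix $Y$ rather than on $M$ (the function name and tolerance handling are our own choices).

```python
import numpy as np

def truncate_factorization(Q, Y, tol=1e-10):
    """Generic low-rank truncation of X ~= Q Y Q^T (an illustration of
    the compression idea, not the rational-Krylov compression of the
    talk): eigendecompose the small symmetric matrix Y and keep only
    the eigenpairs above a relative tolerance."""
    vals, vecs = np.linalg.eigh(Y)
    keep = np.abs(vals) > tol * np.abs(vals).max()
    U = Q @ vecs[:, keep]        # n x r factor, with r = keep.sum()
    return U, vals[keep]         # X ~= U diag(vals) U^T
```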
The computational cost of the proposed algorithm is dominated by the construction of the Lanczos basis, and the compression steps do not require additional matrix-vector products involving $A$. Furthermore, theoretical results demonstrate that the algorithm achieves an approximation error comparable to that of the standard Lanczos algorithm, with an additional error term that can be bounded a priori using Zolotarev numbers. In practice, this additional error is negligible compared to the Lanczos error.
Numerical experiments show that the behavior of the proposed algorithm is comparable to that of the Lanczos algorithm without reorthogonalization, both in terms of matrix-vector products and quality of the approximate solution. Comparisons with existing low-memory variants of the Lanczos method demonstrate competitive performance in terms of accuracy, computational cost, and runtime.
- [1] A. A. Casulli, F. H., D. Kressner, Lanczos with compression for symmetric Lyapunov equations, in preparation.
- [2] B. Beckermann, An error analysis for rational Galerkin projection applied to the Sylvester equation, SIAM J. Numer. Anal., 49 (2011), pp. 2430--2450.
- [3] A. A. Casulli and I. Simunec, A low-memory Lanczos method with rational Krylov compression for matrix functions, arXiv, 2024.