20–21 Jan 2025
Aula Magna "Fratelli Pontecorvo", Building E, Polo Fibonacci, Pisa
Europe/Rome timezone

Lanczos with compression for symmetric eigenvalue problems

20 Jan 2025, 16:00
2h
Aula Magna "Fratelli Pontecorvo", Building E, Polo Fibonacci
Largo Bruno Pontecorvo 3, 56127 Pisa

Speaker

Angelo Alberto Casulli (Gran Sasso Science Institute)

Description

This work focuses on computing a few of the smallest or largest eigenvalues, and the corresponding eigenvectors, of a large symmetric matrix $A$.
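For concreteness, the following minimal Python sketch sets up the target computation on a hypothetical sparse test matrix, using SciPy's Lanczos-based solver `eigsh` as an off-the-shelf reference; it is not the method presented in the talk. Shift-and-invert with `sigma=0` is one standard way to request the smallest eigenvalues.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical test problem: a sparse symmetric matrix (1D discrete Laplacian).
n = 10_000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Reference computation of the k smallest eigenpairs with SciPy's Lanczos-based
# solver (ARPACK); sigma=0 selects the eigenvalues closest to 0 via
# shift-and-invert. The talk targets this kind of problem while bounding
# memory and reducing the number of matrix-vector products.
k = 5
vals, vecs = spla.eigsh(A, k=k, sigma=0.0, which="LM")
print("smallest eigenvalues:", np.sort(vals))
```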

Krylov subspace methods are among the most effective approaches for solving large-scale symmetric eigenvalue problems. Given a starting vector $\mathbf{q}_1$, the $M$-th Krylov subspace is generated by repeatedly applying the matrix $A$ to $\mathbf{q}_1$:

$$\mathcal{K}_{M}(A,\mathbf{q}_1) := \mathrm{span} \{\mathbf{q}_1, A\mathbf{q}_1, \dotsc, A^{M-1}\mathbf{q}_1\}.$$

An orthonormal basis $Q_{M}$ for the Krylov subspace $\mathcal{K}_{M}(A,\mathbf{q}_1)$ is constructed using the Lanczos process. The Rayleigh--Ritz process is then applied to $Q_{M}$ to extract eigenvalue and eigenvector approximations, known as Ritz values and Ritz vectors.

A significant challenge of Krylov subspace methods is the need to store $Q_{M}$. When convergence is slow (with respect to $M$), available memory may be exhausted before satisfactory approximations are obtained. Popular algorithms for large-scale eigenvalue problems address this issue by combining the Lanczos process with restarting. As an alternative to restarting, this work proposes a novel compression approach based on rational Krylov subspaces associated with small matrices, which limits the memory requirements of the Lanczos method.

To provide intuition for our approach, suppose the spectrum of $A$ is ordered such that
$$\lambda_1 \le \cdots \le \lambda_m < \tau < \lambda_{m+1} \le \cdots \le \lambda_n,$$
where a shift $\tau$ separates the smallest $m \ll n$ eigenvalues to be computed from the rest. Let $\chi_\tau(x)$ denote the step function that equals $1$ for $x < \tau$ and $0$ otherwise. In theory, $\chi_\tau(A)\mathbf{q}_1$ serves as a perfect filter: the Lanczos method with this starting vector would yield the exact eigenvalues $\lambda_1, \ldots, \lambda_m$ after $m$ steps for a generic $\mathbf{q}_1$. In practice, however, evaluating $\chi_\tau(A)\mathbf{q}_1$ exactly is computationally infeasible. Instead, a rational approximation of $\chi_\tau$ can be used to compress $\mathcal{K}_{M}(A,\mathbf{q}_1)$. This approach parallels the developments in [2], which employs a rational approximation of a general function $f$ to compress the Lanczos method for approximating $f(A)\mathbf{b}$.

To make this compression practical for eigenvalue problems, several additional components are required. Among these, a reorthogonalization procedure is essential, as numerical loss of orthogonality can lead to a failure in convergence.
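To make the Lanczos and Rayleigh--Ritz steps concrete, and to illustrate the perfect-filter intuition numerically, here is a small Python sketch; it is an illustration only, not the compressed algorithm of [1]. It implements a plain Lanczos iteration with full reorthogonalization, performs the Rayleigh--Ritz extraction on $Q_M$, and then checks on a small dense matrix that starting from $\chi_\tau(A)\mathbf{q}_1$ (computed here by an explicit eigendecomposition, affordable only at toy size) recovers $\lambda_1, \dotsc, \lambda_m$ after $m$ steps. The function `lanczos_rayleigh_ritz` and all problem parameters are illustrative choices.

```python
import numpy as np


def lanczos_rayleigh_ritz(A_mv, q1, M):
    """Plain Lanczos with full reorthogonalization, followed by Rayleigh-Ritz.

    A_mv : callable computing A @ x for a symmetric matrix A
    q1   : starting vector
    M    : number of Lanczos steps; storing the M columns of Q_M is the
           memory cost that restarting or compression aims to limit

    Returns the Ritz values and Ritz vectors extracted from K_M(A, q1).
    """
    n = q1.shape[0]
    Q = np.zeros((n, M))
    alpha = np.zeros(M)
    beta = np.zeros(M - 1)
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(M):
        w = A_mv(Q[:, j])
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]                  # three-term recurrence
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization: costly, but it keeps Q_M numerically
        # orthonormal (the abstract stresses the need for reorthogonalization).
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < M - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:                  # invariant subspace found
                Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    # Rayleigh-Ritz on Q_M: eigenpairs of the tridiagonal T_M = Q_M^T A Q_M.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return theta, Q @ S                          # Ritz values, Ritz vectors


# Toy check of the "perfect filter" intuition (dense, illustrative only).
rng = np.random.default_rng(0)
n, m = 200, 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                                # symmetric test matrix
lam, U = np.linalg.eigh(A)                       # affordable only at toy size
tau = (lam[m - 1] + lam[m]) / 2                  # shift separating lambda_1..lambda_m

q1 = rng.standard_normal(n)
# chi_tau(A) q1 computed exactly from the eigendecomposition: keep only the
# components of q1 lying in the invariant subspace associated with lambda < tau.
below = lam < tau
q_filtered = U[:, below] @ (U[:, below].T @ q1)

# Lanczos with the filtered starting vector recovers lambda_1..lambda_m
# after m steps, up to roundoff.
theta, _ = lanczos_rayleigh_ritz(lambda x: A @ x, q_filtered, m)
print("max error:", np.max(np.abs(theta - lam[:m])))
```

At full scale the eigendecomposition used here for the exact filter is unavailable; as described above, a rational approximation of $\chi_\tau$ takes its place, and rational Krylov compression keeps the number of stored basis vectors bounded.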

We present a series of numerical experiments involving matrices from various applications. These experiments demonstrate that, in terms of matrix-vector products, our new method is consistently competitive with or superior to the Krylov--Schur method, often achieving significant improvements.

References

  1. A. Casulli, D. Kressner, N. Shao. Lanczos with compression for symmetric eigenvalue problems. In preparation.

  2. A. Casulli, I. Simunec. A low-memory Lanczos method with rational Krylov compression for matrix functions. arXiv, 2024.

Primary authors

Angelo Alberto Casulli (Gran Sasso Science Institute), Daniel Kressner (EPFL), Nian Shao (EPFL)
