Conveners
Afternoon Session (Mon 1)
- Valeria Simoncini (Università di Bologna)
Afternoon Session (Tue)
- Françoise Tisseur (The University of Manchester)
Afternoon Session (Tue 2)
- Bernard Beckermann
Afternoon Session (Thu 1)
- Peter Benner (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany)
Afternoon Session (Thu 2)
- Vanni Noferini
We are concerned with the solution of linear operator equations with a compact operator. Compact operators do not have a bounded inverse, so equations of this kind have to be regularized before they can be solved. The Arnoldi process provides a convenient way to reduce a compact operator to a nearby operator of finite rank, and regularization is achieved with Tikhonov’s method. We investigate...
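To make the Arnoldi-Tikhonov idea concrete, here is a minimal numpy sketch (the smoothing kernel, noise level, and parameter choices are invented for illustration and are not from the talk): Arnoldi reduces the operator to a small Hessenberg matrix, and Tikhonov regularization is applied to the projected least-squares problem.

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of the Arnoldi process: A @ V[:, :k] = V @ H, H of size (k+1, k)."""
    n = b.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Project onto the Krylov subspace, then solve the small regularized
    problem  min ||H y - beta e1||^2 + lam ||y||^2  and lift the solution."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    # Tikhonov written as an augmented least-squares problem
    M = np.vstack([H, np.sqrt(lam) * np.eye(k)])
    y = np.linalg.lstsq(M, np.concatenate([rhs, np.zeros(k)]), rcond=None)[0]
    return V[:, :k] @ y

# toy discretization of a compact (smoothing) operator, with noisy data
n = 200
t = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / n   # Gaussian blur kernel
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
x_reg = arnoldi_tikhonov(A, b, k=10, lam=1e-6)
```

Only the small (k+1) x k projected problem is regularized, so the cost of varying the regularization parameter lam is negligible compared to the Arnoldi reduction itself.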
We investigate explicit expressions for the error associated with the block rational Krylov approximation of matrix functions. Two formulas are proposed, both derived from characterizations of the block FOM residual. The first formula employs a block generalization of the residual polynomial, while the second leverages the block collinearity of the residuals. A posteriori error bounds based on...
In this talk, we present an extension of the Majorization-Minimization Generalized Krylov Subspace (MM-GKS) method for solving \ell_p-\ell_q minimization problems, as proposed in [1], by introducing a right preconditioner aimed at accelerating convergence without compromising the quality of the computed solution. The original MM-GKS approach relies on iterative reweighting and projection onto...
We investigate rank revealing factorizations of $m \times n$ polynomial matrices $P(\lambda)$ into products of three, $P(\lambda) = L(\lambda) E(\lambda) R(\lambda)$, or two, $P(\lambda) = L(\lambda) R(\lambda)$, polynomial matrices. Among all possible factorizations of these types, we focus on those for which $L(\lambda)$ and/or $R(\lambda)$ is a minimal basis, since they have favorable...
We will analyze the generic change of the Weierstraß Canonical Form of regular complex structured matrix pencils under
generic structure-preserving additive low-rank perturbations. Several different symmetry structures are considered
and it is shown that, for most of the structures, the generic change in the eigenvalues is analogous
to the case of generic perturbations that ignore the...
Reduced rank extrapolation (RRE) [1,2] can be used to accelerate convergent vector sequences. These sequences are often generated by an iterative process to solve algebraic equations.
In this presentation, I discuss the generalization of this extrapolation framework to sequences of low-rank matrices generated by iterative methods for large-scale matrix equations, such as ...
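As a sketch of the classical vector RRE that this talk generalizes, here is a small numpy example (the linear fixed-point iteration is invented for illustration): for a linear iteration in n dimensions, n + 2 iterates already determine the fixed point exactly.

```python
import numpy as np

def rre(X):
    """Reduced rank extrapolation. X holds iterates x_0, ..., x_{k+1} as columns.
    With U = [dx_0 ... dx_{k-1}] and W = [d2x_0 ... d2x_{k-1}] (first and second
    differences), the extrapolant is s = x_0 + U @ xi, where xi solves the
    least-squares problem W @ xi ≈ -dx_0."""
    D = np.diff(X, axis=1)                 # first differences
    W = np.diff(D, axis=1)                 # second differences
    xi = np.linalg.lstsq(W, -D[:, 0], rcond=None)[0]
    return X[:, 0] + D[:, :-1] @ xi

# convergent linear fixed-point iteration x_{j+1} = M x_j + c
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
M *= 0.9 / np.max(np.abs(np.linalg.eigvals(M)))   # spectral radius 0.9
c = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, c)        # exact fixed point

X = np.zeros((n, n + 2))                          # n + 2 iterates suffice here
for j in range(n + 1):
    X[:, j + 1] = M @ X[:, j] + c
s = rre(X)                                        # recovers x_star
```

The exactness follows because the errors of a linear iteration lie in an n-dimensional invariant subspace; for the low-rank matrix sequences of the talk, the iterates themselves would be stored and combined in factored form instead.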
Multiple orthogonal polynomials (MOPs) arise in various applications, including approximation theory, random matrix theory, and numerical integration. To define MOPs, one needs multiple inner products, leading to two types of MOPs, which are mutually biorthogonal. These MOPs satisfy recurrence relations, which can be linked to linear algebra via discretization. As a result we get an...
The idea of Generalized Locally Toeplitz (GLT) sequences was introduced as a generalization both of classical Toeplitz sequences and of discretized variable-coefficient differential operators; for every sequence in the class, it has been demonstrated that the asymptotic spectrum admits a rigorous description in terms of a function (the symbol) that can be easily identified....
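A minimal illustration of the symbol idea in the pure Toeplitz case (the stencil is chosen for the example, not taken from the talk): the eigenvalues of the tridiagonal Toeplitz matrix generated by the 1D Laplacian stencil [1, -2, 1] are exactly samples of its symbol f(theta) = -2 + 2 cos(theta).

```python
import numpy as np

# Toeplitz matrix generated by the stencil [1, -2, 1]
n = 500
T = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
eigs = np.sort(np.linalg.eigvalsh(T))

# for this tridiagonal Toeplitz matrix the spectrum is exactly the symbol
# f(theta) = -2 + 2*cos(theta) sampled at theta_k = k*pi/(n+1)
theta = np.pi * np.arange(1, n + 1) / (n + 1)
samples = np.sort(-2.0 + 2.0 * np.cos(theta))
```

For general GLT sequences the correspondence is asymptotic (in the sense of eigenvalue distribution) rather than exact, but this special case shows what the symbol encodes.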
We propose a non-intrusive model order reduction technique for stochastic differential equations with additive Gaussian noise. The method extends the operator inference framework and focuses on inferring reduced-order drift and diffusion coefficients by formulating and solving suitable least-squares problems based on observational data. Various subspace constructions based on the available...
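A heavily simplified, one-dimensional sketch of the inference step (the model, coefficients, and estimators here are invented for illustration and omit the reduced-order projection): simulate an Ornstein-Uhlenbeck process and recover the drift coefficient by least squares on the Euler-Maruyama increments, and the diffusion coefficient from the quadratic variation of the residuals.

```python
import numpy as np

# ground truth: dX = a X dt + sigma dW  (Ornstein-Uhlenbeck process)
rng = np.random.default_rng(0)
a_true, sigma_true = -1.0, 0.5
dt, N = 1e-3, 200_000
X = np.zeros(N + 1)
dW = np.sqrt(dt) * rng.standard_normal(N)
for j in range(N):                       # Euler-Maruyama simulation
    X[j + 1] = X[j] + a_true * X[j] * dt + sigma_true * dW[j]

dX = np.diff(X)
# drift: least-squares fit of the increment model dX ≈ a * X * dt
a_hat = (X[:-1] @ dX) / ((X[:-1] @ X[:-1]) * dt)
# diffusion: quadratic variation of the residual increments
sigma_hat = np.sqrt(np.sum((dX - a_hat * X[:-1] * dt) ** 2) / (N * dt))
```

The talk's method sets up analogous least-squares problems for the reduced-order drift and diffusion operators using only observational data, without access to the full-order model.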
We discuss an evolving low-rank approximation of the vorticity solution of the Euler equations describing the flow of a two-dimensional incompressible ideal fluid on the sphere. This problem can be approximated by the so-called Zeitlin model, an isospectral Lie-Poisson flow on the Lie algebra of traceless skew-Hermitian matrices. We propose an approximation of Zeitlin's model based on a...
We study the numerical solution of non-autonomous linear ODEs of the form
$$ \frac{d}{dt} \tilde{u}(t) = \tilde{A}(t)\tilde{u}(t), \quad \tilde{u}(a) = v,$$
where $\tilde{A}(t) \in \mathbb{C}^{N \times N}$ is analytic and often takes the form
$$ \tilde{A}(t) = \sum_{j=1}^k A_j f_j(t),$$
with large, sparse constant matrices $A_j$ and scalar analytic functions $f_j(t)$. Such equations...
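To illustrate the problem class, here is a hedged scipy sketch with an invented two-term example A(t) = A1 cos(t) + A2 sin(t); choosing A2 = A1^2 makes the terms commute, so the exact solution is available in closed form as a check (for non-commuting A_j this shortcut fails, which is what makes the general problem interesting).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# A(t) = A1*cos(t) + A2*sin(t); A2 = A1 @ A1, so all terms commute
n = 50
A1 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))
A2 = A1 @ A1
terms = [(A1, np.cos), (A2, np.sin)]

def rhs(t, u):
    """u'(t) = (sum_j f_j(t) A_j) u(t), applied matrix-free term by term."""
    return sum(f(t) * (A @ u) for A, f in terms)

v = np.ones(n)
T = 1.0
u_T = solve_ivp(rhs, (0.0, T), v, rtol=1e-10, atol=1e-12).y[:, -1]

# commuting case: u(T) = exp( sin(T)*A1 + (1 - cos(T))*A2 ) v
u_exact = expm(np.sin(T) * A1 + (1.0 - np.cos(T)) * A2) @ v
```

For large sparse A_j one would keep the A_j in sparse format and use Krylov-based exponential integrators instead of a dense expm; the sketch only fixes the setting.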
In this talk, I will describe several recent results that use low-rank approximation and machine learning together. There are two stories that I will discuss:
- Using low-rank approximation to speed up machine learning applications (hybrid models)
- Using machine learning algorithms to discover certain classes of low-rank decomposition algorithms (including AlphaEvolve and AlphaTensor...
The CUR approximation of a matrix is attractive in that once the column and row indices are chosen, one can obtain a low-rank approximation without even looking at the whole matrix.
Its computation had previously been unattractive, often starting with the SVD to get reliable pivots. A remarkable paper by Osinsky shows that this is unnecessary, rendering CUR a practical tool in terms of...
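A minimal numpy/scipy sketch of a CUR (cross) approximation; the index selection here uses greedy pivoted QR purely for simplicity, and is one standard choice, not the method of the paper discussed above. For an exactly rank-r matrix with well-chosen indices, the reconstruction is exact.

```python
import numpy as np
from scipy.linalg import qr

def cur(A, r):
    """CUR approximation A ≈ C @ U @ R from r column and r row indices.
    Once the indices are fixed, only C, R, and the r x r core A[I, J] are read."""
    J = qr(A, pivoting=True)[2][:r]        # column indices via pivoted QR
    I = qr(A.T, pivoting=True)[2][:r]      # row indices via pivoted QR
    C, R = A[:, J], A[I, :]
    U = np.linalg.pinv(A[np.ix_(I, J)])    # middle factor from the core block
    return C, U, R

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # exact rank 5
C, U, R = cur(A, 5)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

The pivoted-QR selection above still reads all of A; the appeal noted in the abstract is that cheaper index-selection schemes let one form C, U, R while touching only a fraction of the matrix.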