We are concerned with the solution of linear operator equations with a compact operator. Compact operators do not have a bounded inverse, so equations of this kind must be regularized before they can be solved. The Arnoldi process provides a convenient way to reduce a compact operator to a nearby operator of finite rank, and regularization is achieved with
Tikhonov’s method. We investigate...
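As a toy illustration of this approach (a minimal sketch, not the speakers' implementation; the discretized operator, subspace dimension, and regularization parameter below are illustrative), one can run the Arnoldi process on a matrix and apply Tikhonov regularization to the small projected problem:

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^(k-1) b} and the (k+1) x k Hessenberg matrix H
    satisfying A @ V[:, :k] = V[:, :k+1] @ H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Tikhonov regularization of the projected problem:
    min_y ||H y - beta e1||^2 + lam^2 ||y||^2, then x = V_k y."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    # stacked least-squares formulation of the Tikhonov problem
    M = np.vstack([H, lam * np.eye(k)])
    r = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, r, rcond=None)
    return V[:, :k] @ y
```

Only a small $(k+1) \times k$ Tikhonov problem is solved per choice of the regularization parameter, which is what makes the reduction attractive.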
In this talk, we present an extension of the Majorization-Minimization Generalized Krylov Subspace (MM-GKS) method for solving $\ell_p$-$\ell_q$ minimization problems, as proposed in [1], by introducing a right preconditioner aimed at accelerating convergence without compromising the quality of the computed solution. The original MM-GKS approach relies on iterative reweighting and projection onto...
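To illustrate the reweighting idea alone (without the generalized Krylov subspace projection or the preconditioner discussed in the talk), here is a minimal iteratively reweighted least-squares sketch for the special case $p = 2$, $q = 1$; the smoothing parameter, iteration count, and problem sizes are illustrative:

```python
import numpy as np

def irls_l2_l1(A, b, mu, iters=30, eps=1e-6):
    """IRLS sketch for  min_x ||A x - b||_2^2 + mu * ||x||_1.
    Each |x_i| is majorized by a quadratic with weight 1/|x_i|
    (smoothed by eps), so every step solves a weighted ridge system."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = 1.0 / np.sqrt(x**2 + eps**2)       # current l1 weights
        x = np.linalg.solve(AtA + mu * np.diag(w), Atb)
    return x
```

In the full method, each reweighted subproblem is not solved in the full space but projected onto a growing generalized Krylov subspace, which is where the cost savings come from.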
We investigate rank revealing factorizations of $m \times n$ polynomial matrices $P(\lambda)$ into products of three, $P(\lambda) = L(\lambda) E(\lambda) R(\lambda)$, or two, $P(\lambda) = L(\lambda) R(\lambda)$, polynomial matrices. Among all possible factorizations of these types, we focus on those for which $L(\lambda)$ and/or $R(\lambda)$ is a minimal basis, since they have favorable...
We study the set $\Pi^*_{2^m}$ of polynomials that can be evaluated as a matrix polynomial (in the sense of [Higham, 2008]) using exactly $m$ matrix-matrix multiplications, with no restrictions on the number of scalar multiplications or matrix additions. Based on a tabular parameterization of this set, we develop algebraic transformations that preserve the computed...
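As a small example of the kind of evaluation scheme being parameterized (a standard illustration, not taken from the talk), a generic degree-4 polynomial of a matrix can be evaluated with exactly $m = 2$ matrix-matrix multiplications:

```python
import numpy as np

def poly4_two_mults(c, A):
    """Evaluate p(A) = c0 I + c1 A + c2 A^2 + c3 A^3 + c4 A^4 using
    exactly two matrix-matrix multiplications:
      A2 = A @ A                      (1st multiplication)
      B  = A2 @ (c4 * A2 + c3 * A)    (2nd multiplication)
      p(A) = B + c2 * A2 + c1 * A + c0 * I
    Scalar multiplications and additions are unrestricted."""
    A2 = A @ A
    B = A2 @ (c[4] * A2 + c[3] * A)
    return B + c[2] * A2 + c[1] * A + c[0] * np.eye(A.shape[0])
```

Here a degree-$4 = 2^2$ polynomial is reached with $m = 2$ multiplications, the pattern behind the notation $\Pi^*_{2^m}$.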
We will analyze the generic change of the Weierstra\ss\ Canonical Form of regular complex structured matrix pencils under
generic structure-preserving additive low-rank perturbations. Several different symmetry structures are considered
and it is shown that, for most of the structures, the generic change in the eigenvalues is analogous
to the case of generic perturbations that ignore the...
Rank-structured matrices are matrices with low-rank subblocks, typically off-diagonal blocks. A significant benefit of rank structures, in contrast with traditional structures, is that they are well preserved by common matrix operations such as addition, multiplication, diagonal scaling, and shifting. Sometimes even factorization and inversion preserve the structure. It is well...
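A toy numerical check of this preservation property (block sizes and ranks are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # size of each block in a 2 x 2 block partition

def with_lowrank_offdiag(r):
    """Random 2n x 2n matrix whose two off-diagonal blocks have rank r."""
    M = rng.standard_normal((2 * n, 2 * n))
    M[:n, n:] = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    M[n:, :n] = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    return M

A = with_lowrank_offdiag(1)
B = with_lowrank_offdiag(1)

def offdiag_rank(M):
    return np.linalg.matrix_rank(M[:n, n:])

# the off-diagonal rank grows at most additively:
# (A+B)[0,1] = A[0,1] + B[0,1]               -> rank <= 1 + 1
# (A@B)[0,1] = A[0,0] B[0,1] + A[0,1] B[1,1] -> rank <= 1 + 1
rank_sum, rank_prod = offdiag_rank(A + B), offdiag_rank(A @ B)
```

The same block-algebra argument extends to deeper hierarchical partitions, which is why the structure survives whole chains of operations.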
The idea of Generalized Locally Toeplitz (GLT) sequences was introduced as a generalization both of classical Toeplitz sequences and of variable-coefficient differential operators. For every sequence in this class, it has been shown that the asymptotic spectrum admits a rigorous description in terms of a function (the symbol) that can be easily identified....
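A classical special case illustrates the symbol description (a standard example, not specific to this talk): the eigenvalues of the tridiagonal Toeplitz matrix generated by $f(\theta) = 2 - 2\cos\theta$ behave as samples of the symbol over a uniform grid:

```python
import numpy as np

n = 200
# T_n = tridiag(-1, 2, -1): the Toeplitz matrix generated by the
# symbol f(theta) = 2 - 2*cos(theta) (the 1D discrete Laplacian)
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(T))

# samples of the symbol on a uniform grid in (0, pi)
theta = np.arange(1, n + 1) * np.pi / (n + 1)
samples = np.sort(2 - 2 * np.cos(theta))

max_dev = np.max(np.abs(eigs - samples))
```

For this matrix the match is exact (the eigenvalues are known in closed form); for general GLT sequences the symbol describes the spectrum only asymptotically, as $n \to \infty$.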
We propose a non-intrusive model order reduction technique for stochastic differential equations with additive Gaussian noise. The method extends the operator inference framework and focuses on inferring reduced-order drift and diffusion coefficients by formulating and solving suitable least-squares problems based on observational data. Various subspace constructions based on the available...
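A minimal sketch of the inference step for a linear drift (all sizes, the noise level, and the time discretization are illustrative; the actual framework treats more general operators and subspace constructions):

```python
import numpy as np

rng = np.random.default_rng(3)

# ground truth: a linear SDE  dx = A x dt + sigma dW  evolving on a
# 3-dimensional subspace of R^30 (sizes chosen for illustration)
n, r, dt, steps, sigma = 30, 3, 1e-3, 5000, 0.005
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))   # true subspace
Az = np.array([[-1.0,  2.0,  0.0],
               [-2.0, -1.0,  0.0],
               [ 0.0,  0.0, -0.5]])                # stable drift

Z = np.empty((r, steps + 1))
Z[:, 0] = 1.0
for k in range(steps):                             # Euler-Maruyama
    Z[:, k + 1] = (Z[:, k] + dt * Az @ Z[:, k]
                   + sigma * np.sqrt(dt) * rng.standard_normal(r))
X = Q @ Z                                          # observed snapshots

# operator inference: POD basis + least-squares fit of the drift
U = np.linalg.svd(X, full_matrices=False)[0][:, :r]  # POD basis
Zh = U.T @ X                                         # reduced states
Z0, Z1 = Zh[:, :-1], Zh[:, 1:]
Dz = (Z1 - Z0) / dt                                  # finite differences
A_hat = np.linalg.lstsq(Z0.T, Dz.T, rcond=None)[0].T
```

The fitted `A_hat` recovers the true drift expressed in the POD coordinates; the diffusion coefficient can be inferred from the residuals of the same regression.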
We discuss an evolving low-rank approximation of the vorticity solution of the Euler equations describing the flow of a two-dimensional incompressible ideal fluid on the sphere. This problem can be approximated by the so-called Zeitlin model, an isospectral Lie-Poisson flow on the Lie algebra of traceless skew-Hermitian matrices. We propose an approximation of Zeitlin's model based on a...
In this talk, I will describe several recent results that use low-rank approximation and machine learning together. There are two stories that I will discuss:
- Using low-rank approximation to speed up machine learning applications (hybrid models)
- Using machine learning to compute certain classes of low-rank decompositions (including AlphaEvolve and AlphaTensor...
The CUR approximation of a matrix is attractive in that once the column and row indices are chosen, one can obtain a low-rank approximation without even looking at the whole matrix.
Its computation had previously been unattractive, often starting with the SVD to get reliable pivots. A remarkable paper by Osinsky shows that this is unnecessary, rendering CUR a practical tool in terms of...
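The mechanism is easy to demonstrate: for a matrix of exact rank $r$, choosing $r$ rows and $r$ columns whose intersection block is nonsingular reproduces the matrix exactly (the index choice below is a naive placeholder that works generically for random data; practical schemes select indices far more carefully):

```python
import numpy as np

rng = np.random.default_rng(4)

# a rank-2 test matrix
m, n, r = 40, 30, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# placeholder index choice: simply the first r rows/columns
I, J = np.arange(r), np.arange(r)

C = A[:, J]                           # chosen columns
R = A[I, :]                           # chosen rows
U = np.linalg.pinv(A[np.ix_(I, J)])   # core: (pseudo)inverse of the
                                      # row/column intersection block
A_cur = C @ U @ R                     # exact up to rounding for rank-r A
err = np.linalg.norm(A - A_cur) / np.linalg.norm(A)
```

Note that forming `C`, `R`, and `U` touches only the selected rows and columns, which is exactly the property that makes CUR attractive at scale.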
Sketching techniques have gained popularity in numerical linear algebra as a way to accelerate the solution of least squares problems. The so-called $\varepsilon$-subspace embedding property of a sketching matrix $S$ has been widely used to characterize the problem residual norm, since the sketched procedure is no longer optimal in terms of the (classical) Frobenius or Euclidean norm. By building on available...
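A sketch-and-solve toy example (Gaussian sketch; all sizes are illustrative) showing the residual behavior that the embedding property controls:

```python
import numpy as np

rng = np.random.default_rng(5)

m, n, s = 2000, 50, 400
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Gaussian sketch with s rows; taking s on the order of n / eps^2
# yields an eps-subspace embedding with high probability
S = rng.standard_normal((s, m)) / np.sqrt(s)

x_sk = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]   # sketched solve
x_ex = np.linalg.lstsq(A, b, rcond=None)[0]           # exact solve

r_sk = np.linalg.norm(A @ x_sk - b)   # sketched residual, measured
r_ex = np.linalg.norm(A @ x_ex - b)   # against the ORIGINAL problem
```

The sketched solution minimizes $\|S(Ax - b)\|$, not $\|Ax - b\|$, so its true residual exceeds the optimum; the embedding property bounds that excess by a $(1 + O(\varepsilon))$ factor.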