Consider saddle-point systems of the following form
\[
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\tag{1}
\]
where $A$ is large, sparse and symmetric positive definite and $B$ has full rank. Such systems arise in various scientific applications, including the finite element approximation of PDEs in which a constraint is enforced on a subset of unknowns via a global scalar Lagrange multiplier, numerical continuation methods for large nonlinear systems of equations, and constrained optimization, as well as in linear least squares problems [2].
Null-space methods represent one possible approach to solving such systems. Although solution approaches that compute a null-space basis of the underdetermined block $B$ are traditional in various engineering communities and offer advantages in some situations, general techniques to obtain bases useful for large and sparse $B$ have not been studied frequently in the past. There are several reasons for this. One of them may be that the resulting transformed system with the matrix $Z^T A Z$ is often dense, or ill-conditioned, or both. Another may be the lack of efficient preconditioners for the transformed systems even when the projected matrix is reasonably sparse. The core of the talk is devoted to discussing various algorithms to find the null-space basis $Z$. Although in several of the approaches the treatment of a $B$ that does not have full rank is not difficult, we do not treat such cases at length.
Initial motivation for studying the null-space approaches in this talk is slightly different from the symmetric saddle-point problem given above. Consider the linear least-squares (LS) problem
\[
\min_x \| A x - b \|_2,
\tag{2}
\]
where the system matrix $A \in \mathbb{R}^{m \times n}$ ($m \ge n$) and the right-hand side $b \in \mathbb{R}^m$ are given. Assume that a part of the rows of $A$ is dense and that the rows have been initially permuted such that the dense rows are the last rows of $A$. If we assume a conformal partitioning of $A$ and the vector $b$, we have
\[
A = \begin{pmatrix} A_s \\ A_d \end{pmatrix},
\qquad
b = \begin{pmatrix} b_s \\ b_d \end{pmatrix},
\]
where $m_s$ denotes the number of sparse rows of $A$ and $m_d$ the number of dense rows, with $m = m_s + m_d$, $m_d \ll m_s$, $A_s \in \mathbb{R}^{m_s \times n}$, $A_d \in \mathbb{R}^{m_d \times n}$, $b_s \in \mathbb{R}^{m_s}$ and $b_d \in \mathbb{R}^{m_d}$.
The normal equations that describe the solution [3] (not always representing the best way to obtain it) become
\[
(C_s + A_d^T A_d)\, x = A^T b,
\]
where $C_s = A_s^T A_s$ is the reduced normal matrix. The solution of (2) can be obtained from the equivalent system
\[
\begin{pmatrix} C_s & A_d^T \\ A_d & -I \end{pmatrix}
\begin{pmatrix} x \\ z \end{pmatrix}
=
\begin{pmatrix} A^T b \\ 0 \end{pmatrix}.
\tag{3}
\]
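As a sanity check, the equivalence between the normal equations and the augmented system with the reduced normal matrix in the (1,1) block and the negative identity in the (2,2) block can be illustrated with a small dense NumPy sketch (random data standing in for the sparse block $A_s$ and the dense rows $A_d$; all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m_s, m_d, n = 50, 3, 10               # many "sparse" rows, a few dense rows

A_s = rng.standard_normal((m_s, n))   # stand-in for the sparse block
A_d = rng.standard_normal((m_d, n))   # stand-in for the dense rows
A = np.vstack([A_s, A_d])
b = rng.standard_normal(m_s + m_d)

# Normal-equations solution x = (A^T A)^{-1} A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Augmented system with C_s = A_s^T A_s and -I in the (2,2) block
C_s = A_s.T @ A_s
K = np.block([[C_s, A_d.T],
              [A_d, -np.eye(m_d)]])
rhs = np.concatenate([A.T @ b, np.zeros(m_d)])
xz = np.linalg.solve(K, rhs)

# Eliminating z = A_d x recovers the normal equations, so the
# leading block of the augmented solution is the LS solution
assert np.allclose(xz[:n], x_ne)
```

The second block row forces $z = A_d x$; substituting into the first block row recovers $(C_s + A_d^T A_d) x = A^T b$.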
This is a system nearly of the form (1), with $A$ and $B$ replaced by $C_s$ and $A_d$, but the (2,2) block is nonzero and equal to the negative unit matrix of the appropriate dimension. Provided $A_s$ has full column rank, $C_s$ is symmetric positive definite. But the null-space based approach can be extended even to the case where the (2,2) block is nonzero and $C_s$ is only positive semidefinite. This positive semidefiniteness is quite common in LS problems even when the whole $A$ is of full column rank. The approach to solving (3) in this case for the full-rank LS problem (2) is motivated by the following simple lemma.
\textbf{Lemma}: Consider (3). If $A$ is of full column rank, then $C_s$ is positive definite on the null space of $A_d$.
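The lemma is straightforward to verify numerically: with an orthonormal basis $Z$ of the null space of the dense block $A_d$ (here obtained from a QR factorization of $A_d^T$), all eigenvalues of $Z^T C_s Z$ are positive whenever $A$ has full column rank. A minimal sketch with random data (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m_s, m_d, n = 40, 3, 8
A_s = rng.standard_normal((m_s, n))
A_d = rng.standard_normal((m_d, n))   # A = [A_s; A_d] has full column rank a.s.

C_s = A_s.T @ A_s                     # reduced normal matrix

# Orthonormal null-space basis Z of A_d via QR of A_d^T
Q, _ = np.linalg.qr(A_d.T, mode='complete')
Z = Q[:, m_d:]                        # last n - m_d columns span null(A_d)
assert np.allclose(A_d @ Z, 0)

# Z^T C_s Z is symmetric positive definite: all eigenvalues > 0
eigs = np.linalg.eigvalsh(Z.T @ C_s @ Z)
assert eigs.min() > 0
```

The reasoning mirrors the proof: for a nonzero $z$ with $A_d z = 0$, $z^T C_s z = \|A_s z\|_2^2$ can vanish only if $A z = 0$, contradicting full column rank of $A$.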
Success of any null-space approach then depends on constructing a suitable null-space basis $Z$ of $A_d$ that keeps the matrix $Z^T C_s Z$ sufficiently sparse and well-conditioned. Of course, it cannot be an orthogonal null-space basis computed, for example, by a pivoted QR factorization of $A_d^T$. We will discuss other approaches to find the null-space basis, heavily based on the fact that $A_d$ is wide, that is, it has far fewer rows than columns. The approaches include computing an oblique basis from several separate QR factorizations, and an approach that adds the rows of $A_d$ one by one and extends the basis incrementally. Another approach for constructing $Z$ is the right oblique conjugation. Applied to $A_d$, it yields $V \in \mathbb{R}^{n \times n}$ and a lower trapezoidal $L \in \mathbb{R}^{m_d \times n}$ satisfying
\[
A_d V = L.
\]
The null-space basis $Z$ is then formed by the last $n - m_d$ columns of $V$. This approach is directly related to the PhD thesis of Michele Benzi [1]. Techniques implied by this thesis have motivated the right oblique conjugation, and they may still offer further computational possibilities, not only for constructing $Z$ but possibly also for preconditioning the linear systems with the matrix $Z^T C_s Z$.
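A minimal dense sketch of the right-oblique-conjugation idea, reconstructed here as a Gram–Schmidt-like sweep that makes the trailing columns of $V$ conjugate to each row of $A_d$ in turn (no pivoting, breakdown assumed away; random data and all names illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
m_d, n = 3, 8
A_d = rng.standard_normal((m_d, n))   # wide: far fewer rows than columns

# Build V with A_d V = L lower trapezoidal: at step i, eliminate the
# contribution of row i of A_d from all later columns of V.
V = np.eye(n)
for i in range(m_d):
    a_i = A_d[i]
    pivot = a_i @ V[:, i]
    assert abs(pivot) > 1e-12         # breakdown possible in general
    coeffs = (a_i @ V[:, i + 1:]) / pivot
    V[:, i + 1:] -= np.outer(V[:, i], coeffs)

L = A_d @ V
assert np.allclose(np.triu(L, k=1), 0)   # L is lower trapezoidal
Z = V[:, m_d:]                            # last n - m_d columns of V
assert np.allclose(A_d @ Z, 0)            # a null-space basis of A_d
```

Because the trailing $n - m_d$ columns of a lower trapezoidal $L \in \mathbb{R}^{m_d \times n}$ are zero, the corresponding columns of $V$ automatically span the null space of $A_d$.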
Our experimental results show that the null-space bases obtained from the mentioned approaches are of high quality and can push forward further research in optimization and in solving constrained least squares problems. Linear least squares problems with a small number of dense rows arising from practical applications are used to illustrate our ideas and to explore their potential for solving large-scale systems. Our work was originally motivated by the paper by Howell [4]. A significant part of the obtained results has been published in [5].
References
[1] M. Benzi, A Direct Row-Projection Method For Sparse Linear Systems, Ph.D. Thesis, NCSU, 1993.
[2] M. Benzi, G.H. Golub, and J. Liesen, Numerical solution of saddle point problems, Acta Numerica,
14:1–137, 2005.
[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.
[4] J. S. Howell, Prestructuring sparse matrices with dense rows and columns via null space methods,
Numerical Linear Algebra with Applications, 25:1–30, 2018.
[5] J. A. Scott and M. Tůma, A null-space approach for large-scale symmetric saddle point systems with
a small and non-zero (2,2) block, Numerical Algorithms, published online, 2022.