In numerous fields of science and engineering we are faced with problems
of the form $$A{x}+{\eta}=b^\delta,$$ where $A\in\mathbb{R}^{m\times n}$
is a large and severely ill-conditioned matrix, i.e., such that its
singular values decay rapidly to zero without any significant gap
between the smallest ones, ${x}\in\mathbb{R}^n$ is the unknown quantity
we wish to recover, ${\eta}\in\mathbb{R}^m$ collects the unknown
measurement and discretization errors, and is often referred to as
noise, and ${b}^\delta\in\mathbb{R}^m$ represents the measured and
perturbed data. We will assume that the noise that corrupts the data is
white Gaussian. Problems of this form are referred to as discrete
ill-posed inverse problems. Due to the ill-conditioning of $A$ and the
presence of the noise ${\eta}$, the naive solution of the system of
equations above is a poor approximation of the desired one.
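This sensitivity is easy to observe numerically. The following sketch (my own illustration, not taken from the references) uses the Hilbert matrix as a standard example of a severely ill-conditioned $A$ and shows how the naive solution $A^{-1}b^\delta$ amplifies even a tiny Gaussian perturbation:

```python
import numpy as np

# Severely ill-conditioned test matrix: the Hilbert matrix, whose
# singular values decay rapidly to zero with no significant gap.
n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

rng = np.random.default_rng(0)
x_true = np.ones(n)                      # the unknown we wish to recover
eta = 1e-6 * rng.standard_normal(n)      # small white Gaussian noise
b_delta = A @ x_true + eta               # measured, perturbed data

x_naive = np.linalg.solve(A, b_delta)    # naive solution of A x = b^delta

print(np.linalg.cond(A))                       # enormous condition number
print(np.linalg.norm(x_naive - x_true))        # error far larger than ||eta||
```

Although the noise has norm on the order of $10^{-6}$, the reconstruction error of the naive solution is larger by many orders of magnitude.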
To reduce the sensitivity of the problem to the presence of noise, we
consider the $\ell^2-\ell^q$ regularization
$$\arg\min_{{x}}\frac{1}{2}\left\|A{x}-{b}^\delta\right\|_2^2+\frac{\mu}{q}\left\|L{x}\right\|_q^q,$$
where $0<q\leq2$, $\mu>0$ is the so-called regularization parameter,
$L\in\mathbb{R}^{s\times n}$ is referred to as regularization operator,
and $\|{z}\|_q^q=\sum_{i}\lvert z_i \rvert^q.$ We refer to the latter
quantity as the $\ell^q$-norm for any $0<q\leq 2$, even though for $q<1$ it
is not a norm, as it does not satisfy the triangle inequality; see,
e.g., [2] for a discussion of $\ell^p-\ell^q$ regularization and [1]
for a software implementation.
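One standard way to attack this nonsmooth, possibly nonconvex functional is a smoothed iteratively reweighted least-squares (IRLS) scheme, in which $\lvert t\rvert^q$ is majorized by a quadratic with weight $(t^2+\varepsilon)^{q/2-1}$. The sketch below illustrates this idea on a dense toy problem; it is not the limited-memory generalized Krylov subspace method of [1], and the function name, smoothing parameter `eps`, and iteration count are illustrative choices:

```python
import numpy as np

def l2_lq_irls(A, b, L, mu, q, iters=50, eps=1e-8):
    """Smoothed IRLS sketch for min_x 0.5*||A x - b||_2^2 + (mu/q)*||L x||_q^q.

    At each step, |t|^q is replaced by a quadratic with weight
    (t^2 + eps)^(q/2 - 1), giving a Tikhonov-type linear system.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares start
    for _ in range(iters):
        w = (np.abs(L @ x) ** 2 + eps) ** (q / 2 - 1)  # IRLS weights
        M = A.T @ A + mu * (L.T @ (w[:, None] * L))    # weighted normal equations
        x = np.linalg.solve(M, A.T @ b)
    return x

# Toy sparse-recovery example: L = I with q = 1 promotes sparsity in x.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[3] = 1.0
x = l2_lq_irls(A, A @ x_true, np.eye(10), mu=1e-3, q=1.0)
```

Choosing $q<1$ strengthens the sparsity promotion on $Lx$, at the price of a nonconvex subproblem; the smoothing parameter keeps the weights bounded when entries of $Lx$ vanish.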
To further improve the quality of the computed solutions and to provide
a theoretically sound convergence analysis, we consider the refinement
strategy $$\begin{cases}
{r}^{(k)}={b}^\delta-A{x}^{(k)},\\
{h}^{(k)}= \arg\min_{{h}}\frac{1}{2}\left\|A{h}-{r}^{(k)}\right\|_2^2+\frac{\mu}{q}\left\|L{h}\right\|_q^q,\\
{x}^{(k+1)}={x}^{(k)}+{h}^{(k)}.
\end{cases}$$ We show that, under reasonable assumptions, the iterations
are stopped after finitely many steps when the Discrepancy Principle is
employed as the stopping criterion, and that the resulting method is a
regularization method. Selected numerical results illustrate the good
performance of the proposed approach.
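The refinement loop together with the Discrepancy Principle can be sketched as follows. For illustration only, the inner $\ell^2-\ell^q$ step is taken with $q=2$, so each correction $h^{(k)}$ solves a Tikhonov system exactly; the function name, the safety factor `tau`, and the parameter values are hypothetical choices, not those analyzed in the talk:

```python
import numpy as np

def refine_with_dp(A, b_delta, L, mu, delta, tau=1.01, max_iter=30):
    """Refinement sketch: x^{(k+1)} = x^{(k)} + h^{(k)}, stopped by the
    Discrepancy Principle  ||b^delta - A x^{(k)}|| <= tau * delta,
    where delta is (an estimate of) the noise norm ||eta||.

    Inner step taken with q = 2, i.e. h solves (A^T A + mu L^T L) h = A^T r.
    """
    x = np.zeros(A.shape[1])
    M = A.T @ A + mu * (L.T @ L)
    for _ in range(max_iter):
        r = b_delta - A @ x                      # current residual r^{(k)}
        if np.linalg.norm(r) <= tau * delta:     # Discrepancy Principle
            break
        h = np.linalg.solve(M, A.T @ r)          # regularized correction h^{(k)}
        x = x + h
    return x

# Small overdetermined example with known noise level delta = ||eta||.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
eta = 0.01 * rng.standard_normal(30)
delta = np.linalg.norm(eta)
b_delta = A @ x_true + eta
x = refine_with_dp(A, b_delta, np.eye(10), mu=1e-2, delta=delta)
```

Since each sweep reduces the residual, the Discrepancy Principle is met after finitely many steps in this well-behaved example, mirroring the finite-termination result stated above.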
References
1. A. Buccini and L. Reichel, Software for limited memory restarted $\ell^p-\ell^q$ minimization methods using generalized Krylov subspaces, Electronic Transactions on Numerical Analysis, 61, 66-91 (2024).
2. A. Buccini, M. Pragliola, L. Reichel, and F. Sgallari, A comparison of parameter selection rules for $\ell^p-\ell^q$ minimization, Annali dell'Università di Ferrara, 68, 441-463 (2022).