Speaker
Description
Given a set of matrices $A_i \in \mathbb{C}^{n \times n}$ and a set of analytic functions $f_i : \mathbb{C} \to \mathbb{C}$, we consider a regular matrix-valued function $\mathcal{F}(\lambda)=\sum_{i=0}^d f_i(\lambda) A_i$, that is, one for which $\det \left( \mathcal{F}(\lambda) \right)$ does not vanish identically for $\lambda \in \mathbb{C}$. An interesting problem consists in computing the nearest singular function $\widetilde{\mathcal{F}}(\lambda)=\sum_{i=0}^d f_i(\lambda) \left( A_i + \Delta A_i \right)$, where the distance is measured by the Frobenius norm of the perturbations $\Delta A_i$. This problem is of particular importance, for example, in the context of delay differential-algebraic equations, where a function of the form $\mathcal{D}(\lambda)=\lambda E - A - B e^{-\tau \lambda}$ is studied. Indeed, in this setting and in the presence of small delays $\tau$, the ill-posedness of the problem may be connected with the numerical singularity of the function $\mathcal{D}(\lambda)$, even if the pencil $\lambda E - A$ is regular. We will provide a general overview of the problem, describing the possible issues connected with the lack of robustness of the differential equation, associated with destabilizing perturbations of $\mathcal{D}(\lambda)$. Moreover, we propose a method for the numerical approximation of the nearest singular function $\widetilde{\mathcal{F}}(\lambda)$, which rephrases the matrix nearness problem for the matrix-valued function as an equivalent optimization problem. This optimization problem, however, turns out to be highly non-convex. To solve it, we propose a two-level procedure, which introduces a constrained gradient system of differential equations in the inner iteration and a Newton-like method for the optimization of the perturbation size in the outer one. This is joint work with Nicola Guglielmi (GSSI).
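Since the abstract only outlines the structure of the two-level procedure, the Python sketch below is a purely illustrative toy, not the method from the talk. Everything specific in it is an assumption made here for the example: the singularity of the perturbed function is measured by the smallest singular values of $\widetilde{\mathcal{F}}(\lambda_j)$ at a finite set of sample points $\lambda_j$ on the unit circle, the perturbations are written as $\Delta A_i = \varepsilon E_i$ with the stacked $E_i$ of unit Frobenius norm, the inner constrained gradient system is discretized by a norm-projected gradient descent, and the outer Newton-like step uses a finite-difference approximation of the derivative with respect to $\varepsilon$. All names and parameters are hypothetical.

```python
# Illustrative sketch only: the functional, discretization, and step sizes below are
# assumptions for this toy example, not the method described in the abstract.
import numpy as np

def eval_F(lam, coeffs, funcs):
    """Evaluate F(lambda) = sum_i f_i(lambda) * A_i."""
    return sum(f(lam) * A for f, A in zip(funcs, coeffs))

def functional_and_gradient(E, eps, coeffs, funcs, samples):
    """G_eps(E) = (1/2m) sum_j sigma_min(F(lambda_j; A_i + eps*E_i))^2 and its gradient in E."""
    G = 0.0
    grad = [np.zeros(A.shape, dtype=complex) for A in coeffs]
    for lam in samples:
        M = eval_F(lam, [A + eps * Ei for A, Ei in zip(coeffs, E)], funcs)
        U, s, Vh = np.linalg.svd(M)
        sigma, u, v = s[-1], U[:, -1], Vh[-1, :].conj()       # smallest singular triplet
        G += 0.5 * sigma**2 / len(samples)
        for i, f in enumerate(funcs):
            # d(sigma)/dA_i in direction Z is Re(f_i(lam) u^H Z v); the chain rule
            # for A_i + eps*E_i and 0.5*sigma^2 gives the extra factor eps*sigma
            grad[i] += eps * sigma * np.conj(f(lam)) * np.outer(u, v.conj()) / len(samples)
    return G, grad

def inner_flow(eps, coeffs, funcs, samples, E, steps=200, h=0.5):
    """Inner level: projected gradient descent over unit-Frobenius-norm perturbations."""
    for _ in range(steps):
        _, grad = functional_and_gradient(E, eps, coeffs, funcs, samples)
        E = [Ei - h * gi for Ei, gi in zip(E, grad)]
        nrm = np.sqrt(sum(np.linalg.norm(Ei, 'fro')**2 for Ei in E))
        E = [Ei / nrm for Ei in E]                            # stay on the unit sphere
    return E, functional_and_gradient(E, eps, coeffs, funcs, samples)[0]

def nearest_singular(coeffs, funcs, samples, eps0=0.5, outer=15, tol=1e-10):
    """Outer level: Newton-like iteration on the perturbation size eps."""
    rng = np.random.default_rng(0)
    E = [rng.standard_normal(A.shape) + 1j * rng.standard_normal(A.shape) for A in coeffs]
    nrm = np.sqrt(sum(np.linalg.norm(Ei, 'fro')**2 for Ei in E))
    E = [Ei / nrm for Ei in E]
    eps = eps0
    for _ in range(outer):
        E, g = inner_flow(eps, coeffs, funcs, samples, E)
        if g < tol:
            break
        d = 1e-6                                              # finite-difference derivative of eps -> g
        _, g_plus = inner_flow(eps + d, coeffs, funcs, samples, E, steps=50)
        dg = (g_plus - g) / d
        eps = eps - g / dg if dg < 0 else 1.5 * eps           # Newton-like step with a crude fallback
    return eps, [eps * Ei for Ei in E]

if __name__ == "__main__":
    # Toy instance of D(lambda) = lambda*E - A - B*exp(-tau*lambda) with E = I.
    n, tau = 3, 0.1
    rng = np.random.default_rng(1)
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    coeffs = [A, np.eye(n), B]
    funcs = [lambda lam: -1.0, lambda lam: lam, lambda lam: -np.exp(-tau * lam)]
    samples = np.exp(2j * np.pi * np.arange(16) / 16)         # sample points on the unit circle
    eps, deltas = nearest_singular(coeffs, funcs, samples)
    print("estimated perturbation size eps:", eps)
```

The example instantiates the sketch on a random delay pencil $\mathcal{D}(\lambda)=\lambda E - A - B e^{-\tau \lambda}$ with $E = I$; in this simplified setting the returned $\varepsilon$ is only a rough estimate of the distance to singularity, whereas the talk's actual functional, gradient system, and outer iteration may differ.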