August 31, 2025 to September 5, 2025
Palazzone di Cortona
Europe/Rome timezone

On computing the zeros of multiple and Gegenbauer-Sobolev orthogonal polynomials

Sep 5, 2025, 9:00 AM
30m
Palazzone di Cortona

52044 Le Contesse, Province of Arezzo

Speaker

Nicola Mastronardi (Istituto per le Applicazioni del Calcolo, CNR)

Description

Multiple orthogonal polynomials (MOPs) are a generalization of orthogonal polynomials.
They originally appeared in Hermite-Padé approximation (simultaneous rational approximation) and number theory. More recently, they have turned out to be very useful in random matrix theory, combinatorics and Markov chains.
Given $r$ weight functions $w^{(i)}(x) \ge 0$ with supports $\Delta^{(i)}$, $i=1,\ldots,r$, on the real line, the sequence of MOPs $\left\{p_n(x)\right\}_{n=0}^{\infty}$ satisfies the following orthogonality conditions:
$$ \int_{\Delta^{(i)}} p_{n}(x)\, x^k w^{(i)}(x)\,dx = 0, \quad 0\le k \le n_{i}-1, \quad i=1,\ldots,r, $$
with $n = \sum_{i=1}^r n_i$. A set of MOPs satisfies an $(r+2)$-term recurrence relation. Without loss of generality, we focus on the case $r=2$. The set of monic MOPs therefore satisfies the $4$-term recurrence relation
$$ x p_{i}(x) = a_i p_{i+1}(x) + b_i p_{i}(x) + c_i p_{i-1}(x) + d_i p_{i-2}(x), \quad a_i=1, \quad i=0, \ldots, n-1, $$
with $p_{-2}(x)=p_{-1}(x)=0$. Writing this in matrix form, we obtain
$$ H_n \left[\begin{array}{c} p_0(x) \\ p_1 (x) \\ \vdots \\ p_{n-1}(x) \end{array}\right] + a_{n-1} \left[\begin{array}{c} 0 \\ \vdots \\ 0 \\ p_n (x) \end{array}\right] = x \left[\begin{array}{c} p_0(x) \\ p_1 (x) \\ \vdots \\ p_{n-1}(x) \end{array}\right], $$
where $H_n$ is the $n\times n$ banded lower Hessenberg matrix with two sub-diagonals and one super-diagonal:
$$ H_n := \left[\begin{array}{ccccccc} b_0 & a_0 & 0 & 0 & 0 & \ldots & 0 \\ c_1 & b_1 & a_1 & 0 & 0 & \ldots & 0 \\ d_2 & c_2 & b_2 & a_2 & 0 & \ldots & 0 \\ 0 & d_3 & c_3 & b_3 & a_3 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \ldots & 0 & d_{n-2} & c_{n-2} & b_{n-2} & a_{n-2} \\ 0 & \ldots & 0 & 0 & d_{n-1} & c_{n-1} & b_{n-1} \\ \end{array} \right]. $$
The zeros of $p_n(x)$ are the eigenvalues of $H_n$, and they are the nodes of the simultaneous Gaussian quadrature rules associated with MOPs.

Gegenbauer-Sobolev polynomials $p_n(x)$, $n=0,1,\ldots$, are orthogonal with respect to the Sobolev inner product
$$ \left( p,q \right)_S = \int_{-1}^{1} p(x)\, q(x)\, \omega^{(\alpha)}(x)\, dx + \gamma \int_{-1}^{1} p'(x)\, q'(x)\, \omega^{(\alpha-\frac{1}{2})}(x)\, dx, \quad \gamma >0,\;\alpha > -\frac{1}{2}. $$
The monic Gegenbauer-Sobolev polynomial $p_n(x)$ of degree $n\ge 0$ is odd if $n$ is odd and even if $n$ is even (odd-even property).
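As a minimal illustration of this construction, the sketch below (Python with NumPy) assembles $H_n$ for the case $r=2$ and checks that $p_n$, evaluated through the $4$-term recurrence, vanishes at the computed eigenvalues. The recurrence coefficients here are hypothetical placeholders chosen only to show the matrix structure; they are not derived from any actual pair of weights $w^{(1)}, w^{(2)}$.

```python
import numpy as np

def mop_hessenberg(a, b, c, d):
    """Assemble the n-by-n banded lower Hessenberg matrix H_n (case r = 2):
    b_i on the diagonal, a_i on the super-diagonal, c_i and d_i on the
    first and second sub-diagonals."""
    n = len(b)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = b[i]
        if i + 1 < n:
            H[i, i + 1] = a[i]
        if i >= 1:
            H[i, i - 1] = c[i]
        if i >= 2:
            H[i, i - 2] = d[i]
    return H

def p_eval(x, a, b, c, d, n):
    """Evaluate the monic MOP p_n(x) via the 4-term recurrence
    x p_i = a_i p_{i+1} + b_i p_i + c_i p_{i-1} + d_i p_{i-2},
    with p_{-2} = p_{-1} = 0 and p_0 = 1."""
    pm2, pm1, p = 0.0, 0.0, 1.0
    for i in range(n):
        pm2, pm1, p = pm1, p, ((x - b[i]) * p - c[i] * pm1 - d[i] * pm2) / a[i]
    return p

# Hypothetical recurrence coefficients (a_i = 1 for monic MOPs;
# c_0, d_0, d_1 are unused and set to zero).
n = 8
rng = np.random.default_rng(0)
a = np.ones(n)
b = rng.uniform(-0.2, 0.2, n)
c = np.concatenate(([0.0], rng.uniform(0.1, 0.3, n - 1)))
d = np.concatenate(([0.0, 0.0], rng.uniform(0.01, 0.05, n - 2)))

H = mop_hessenberg(a, b, c, d)
zeros = np.linalg.eigvals(H)          # zeros of p_n = eigenvalues of H_n
residuals = [abs(p_eval(z, a, b, c, d, n)) for z in zeros]
```

For a well-conditioned toy example like this one, the residuals $|p_n(\lambda)|$ at the computed eigenvalues are close to machine precision; the ill-conditioning addressed in the talk arises for the actual Hessenberg matrices of larger order.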
The Althammer polynomial $p_n(x)$ of degree $n$ satisfies a long recurrence relation, whose coefficients can be arranged into a Hessenberg matrix of order $n$ whose eigenvalues are the zeros of $p_n(x)$. Unfortunately, the eigenvalues of the aforementioned Hessenberg matrices are very ill-conditioned, and standard balancing procedures do not improve their condition numbers. In this talk we describe fast and reliable algorithms for computing the zeros of the aforementioned polynomials.

The first step of the algorithm is to construct a balancing matrix that, applied to $H_n$, drastically reduces the condition number of the Hessenberg eigenvalue problem, thereby allowing the associated simultaneous Gaussian quadrature rule to be computed reliably in floating point arithmetic. Specifically, this step consists of determining a diagonal matrix $\Delta_n = {\tt diag}(\delta_1,\ldots,\delta_n)$ balancing $H_n$, i.e., $\hat{H}_n = \Delta_n^{-1} H_n \Delta_n$, such that ${\tt triu}(\hat{H}_n,-1)$ is symmetric, where ${\tt triu}(A, k)$ denotes the elements of $A$ on and above the $k$th diagonal.

The second step of the algorithm consists of computing the eigenvalue decomposition of $\hat{H}_n$. This can be done by applying the $QR$ algorithm to $\hat{H}_n$, requiring $\mathcal{O}(n^3)$ floating point operations and $\mathcal{O}(n^2)$ memory. In order to reduce complexity and memory, for the second step we propose an alternative procedure based on a variant of the Aberth-Ehrlich method, which computes the eigenvalues and the associated left and right eigenvectors using only $\mathcal{O}(n)$ memory and $\mathcal{O}(n^2)$ floating point operations.
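The two steps can be sketched as follows, under stated assumptions. The balancing step derives the scaling $\delta_{i+1} = \delta_i \sqrt{c_{i+1}/a_i}$ from the symmetry condition on the first sub- and super-diagonals; it assumes the products $H_{i+1,i} H_{i,i+1}$ are positive. The root-finder shown is the classical Aberth-Ehrlich iteration on a polynomial given by its coefficients, intended only to illustrate the simultaneous iteration itself, not the structured matrix variant proposed in the talk. The example matrix and polynomial are hypothetical.

```python
import numpy as np

def balance_hessenberg(H):
    """Sketch of the balancing step: find D = diag(delta) such that in
    Hhat = D^{-1} H D the first sub-diagonal equals the super-diagonal,
    i.e. triu(Hhat, -1) is symmetric. Assumes H[i+1,i]*H[i,i+1] > 0."""
    n = H.shape[0]
    delta = np.ones(n)
    for i in range(n - 1):
        # symmetry condition: H[i+1,i]*d_i/d_{i+1} = H[i,i+1]*d_{i+1}/d_i
        delta[i + 1] = delta[i] * np.sqrt(H[i + 1, i] / H[i, i + 1])
    Hhat = H * delta[np.newaxis, :] / delta[:, np.newaxis]
    return Hhat, delta

def aberth_ehrlich(coeffs, tol=1e-12, max_iter=200):
    """Classical Aberth-Ehrlich simultaneous root-finder for a monic
    polynomial given by its coefficients (highest degree first).
    A generic illustration only; the talk's variant works directly on
    the balanced Hessenberg matrix with O(n) memory."""
    deg = len(coeffs) - 1
    dcoeffs = np.polyder(coeffs)
    # initial guesses on a circle around the root centroid -coeffs[1]/deg
    center = -coeffs[1] / deg
    z = center + 1.5 * np.exp(2j * np.pi * (np.arange(deg) + 0.25) / deg)
    for _ in range(max_iter):
        newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
        diff = z[:, np.newaxis] - z[np.newaxis, :]
        np.fill_diagonal(diff, 1.0)           # placeholder for j = k
        inv = 1.0 / diff
        np.fill_diagonal(inv, 0.0)            # exclude the j = k term
        repulse = inv.sum(axis=1)             # sum_{j != k} 1/(z_k - z_j)
        w = newton / (1.0 - newton * repulse) # Aberth correction
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z

# Balancing demo on a hypothetical 6x6 banded lower Hessenberg matrix
# (a_i = 1 on the super-diagonal, positive c_i on the first sub-diagonal).
n = 6
rng = np.random.default_rng(1)
H = np.zeros((n, n))
H[np.arange(n), np.arange(n)] = rng.uniform(-0.2, 0.2, n)
H[np.arange(n - 1), np.arange(1, n)] = 1.0
H[np.arange(1, n), np.arange(n - 1)] = rng.uniform(0.1, 0.4, n - 1)
H[np.arange(2, n), np.arange(n - 2)] = rng.uniform(0.01, 0.05, n - 2)
Hhat, delta = balance_hessenberg(H)

# Aberth-Ehrlich demo on (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = aberth_ehrlich(np.array([1.0, -6.0, 11.0, -6.0]))
```

Since $\hat{H}_n$ is a similarity transform of $H_n$, the balanced matrix keeps the same eigenvalues while its tridiagonal part becomes symmetric; the second sub-diagonal is rescaled but need not be symmetric.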

References

Coussement, J., Van Assche, W.: Gaussian quadrature for multiple orthogonal polynomials, https://doi.org/10.1016/j.cam.2004.0 (2005)
Laudadio, T., Mastronardi, N., Van Dooren, P.: Computing Gaussian quadrature rules with high relative accuracy, https://doi.org/10.1007/s11075-022-01297-9 (2023)
Laudadio, T., Mastronardi, N., Van Dooren, P.: On computing modified moments for half-range Hermite weights, https://doi.org/10.1007/s11075-023-01615-9 (2024)
Laudadio, T., Mastronardi, N., Van Dooren, P.: Computational aspects of simultaneous Gaussian quadrature, https://doi.org/10.1007/s11075-024-01785-0 (2024)
Laudadio, T., Mastronardi, N., Van Assche, W., Van Dooren, P.: A Matlab package computing simultaneous Gaussian quadrature rules for multiple orthogonal polynomials, https://doi.org/10.1016/j.cam.2024.116109 (2024)
Laudadio, T., Mastronardi, N., Van Dooren, P.: Fast and reliable algorithms for computing the zeros of Althammer polynomials, https://doi.org/10.1016/j.apnum.2024.09.004 (2025)

Primary author

Nicola Mastronardi (Istituto per le Applicazioni del Calcolo, CNR)
