Graeffe’s Method

Numerical Methods

Source inspiration: (Mathews 2000–2019).

Description

Graeffe’s method, independently discovered by Karl Heinrich Graeffe (1799–1873), Germinal Pierre Dandelin (1794–1847), and Nikolai Ivanovich Lobachevsky (1792–1856), is a root-finding algorithm specifically for polynomials. Unlike iterative methods such as Newton-Raphson that track a single root, Graeffe’s method finds all roots simultaneously by transforming the polynomial into one whose roots are more widely separated, then reading off approximations directly from the coefficients.

Root Squaring

The core idea is the root-squaring transformation. Given a degree-\(n\) polynomial

\[p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0\]

with roots \(r_1, r_2, \ldots, r_n\), define \(p_1\) to be the polynomial of degree \(n\) whose roots are \(r_1^2, r_2^2, \ldots, r_n^2\). This is constructed via

\[p_1(x) = (-1)^n \, p\!\left(\sqrt{x}\right) p\!\left(-\sqrt{x}\right).\]

Applying this transformation \(m\) times yields a polynomial \(p_m\) with roots \(r_1^{2^m}, r_2^{2^m}, \ldots, r_n^{2^m}\). Even if the original roots are only mildly different in magnitude, after enough squarings the roots of \(p_m\) become widely separated, because the ratios are raised to the power \(2^m\).
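In code, one squaring step is just polynomial arithmetic on coefficient lists. Below is a minimal Python sketch (the linked animation scripts are in Julia; this illustration assumes coefficients are stored constant term first, and the helper names are ours):

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (constant term first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def graeffe_step(a):
    """One root-squaring step: coefficients of p_1(x) = (-1)^n p(sqrt(x)) p(-sqrt(x)).

    Writing p(x) = E(x^2) + x O(x^2) (even/odd split of the coefficients),
    p(sqrt(x)) p(-sqrt(x)) = E(x)^2 - x O(x)^2.
    """
    n = len(a) - 1
    even, odd = a[0::2], a[1::2]
    b = [0.0] * (n + 1)
    for k, c in enumerate(polymul(even, even)):
        b[k] += c                      # E(x)^2 contribution
    for k, c in enumerate(polymul(odd, odd)):
        b[k + 1] -= c                  # -x O(x)^2 contribution
    sign = (-1) ** n
    return [sign * c for c in b]
```

For \(p(x) = (x-1)(x-2)\), i.e. `[2, -3, 1]`, one step yields `[4, -5, 1]`, which is \((x-1)(x-4)\): the roots 1 and 2 have been squared.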

Separated Root Theorem

Once the roots are sufficiently separated in magnitude — meaning \(|r_1^{2^m}| \gg |r_2^{2^m}| \gg \cdots \gg |r_n^{2^m}|\) — the separated root theorem (Vieta’s formulas applied term by term) gives

\[|r_k|^{2^m} \approx \left|\frac{c_{n-k}}{c_{n-k+1}}\right|, \qquad k = 1, 2, \ldots, n,\]

where \(c_0, c_1, \ldots, c_n\) are the coefficients of \(p_m\), with \(c_n\) the leading coefficient. The magnitude of each original root is then recovered as

\[|r_k| \approx \left|\frac{c_{n-k}}{c_{n-k+1}}\right|^{1/2^m}.\]

The sign (or complex argument) of each root is determined separately, e.g. for a real root by evaluating \(p\) at both \(+|r_k|\) and \(-|r_k|\) and keeping the candidate at which \(|p|\) is nearly zero.
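Putting the squaring step and the separated root theorem together, here is a self-contained Python sketch of the whole procedure for polynomials with simple real roots (function names and the closed-form coefficient update are illustrative assumptions of this sketch):

```python
def graeffe_step(a):
    """One root-squaring step; `a` lists a_0..a_n, constant term first."""
    n = len(a) - 1
    return [(-1) ** (n + k)
            * (a[k] ** 2
               + 2 * sum((-1) ** d * a[k - d] * a[k + d]
                         for d in range(1, min(k, n - k) + 1)))
            for k in range(n + 1)]

def graeffe_roots(a, m):
    """Approximate the roots of p (assumed real and simple) after m squarings.

    Magnitudes come from |r_k| ~ |c_{n-k}/c_{n-k+1}|^(1/2^m); each sign is
    chosen by evaluating p at +|r_k| and -|r_k| and keeping the candidate
    at which |p| is smaller.
    """
    c = list(a)
    for _ in range(m):
        c = graeffe_step(c)
    n = len(c) - 1
    p = lambda x: sum(coef * x ** i for i, coef in enumerate(a))
    roots = []
    for k in range(1, n + 1):
        mag = abs(c[n - k] / c[n - k + 1]) ** (1.0 / 2 ** m)
        roots.append(mag if abs(p(mag)) <= abs(p(-mag)) else -mag)
    return roots  # ordered from largest to smallest magnitude
```

Applied with \(m = 6\) to the Case 1 polynomial below, \(x^4 - 2x^3 - 13x^2 + 14x + 24 = (x+1)(x-2)(x+3)(x-4)\), this returns approximately `[4, -3, 2, -1]`.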

Convergence and Limitations

For polynomials with distinct real roots, the approximation error decreases quadratically with each root-squaring step — the number of accurate digits roughly doubles per iteration (similar to Newton-Raphson).

However, the method has a serious practical limitation: the coefficients of \(p_m\) grow exponentially as \(|r_k|^{2^m}\), rapidly exceeding the range of standard floating-point arithmetic. Most floating-point implementations cannot apply more than a handful of iterations before overflow. The method was popular in the 19th and early 20th centuries, when it was carried out by hand with extra guard digits; modern implementations typically require arbitrary-precision arithmetic, and the method is today primarily of historical and educational interest.
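The overflow ceiling is easy to see by tracking just the constant term, whose magnitude each squaring step squares exactly (\(b_0 = \pm a_0^2\), so the constant term of \(p_m\) is \(a_0^{2^m}\) in magnitude). A small Python sketch using the Case 1 polynomial's constant term \(a_0 = 24\):

```python
import math

# |constant term| of (x+1)(x-2)(x+3)(x-4) is 24; each squaring step squares it.
c0 = 24.0
m = 0
while not math.isinf(c0):
    c0 *= c0          # after m steps, c0 = 24^(2^m)
    m += 1
print(m)  # 8: 24^256 ~ 1e353 already exceeds the float64 maximum (~1.8e308)
```

So for this modest quartic, IEEE double precision tolerates only seven squarings before the coefficients overflow.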

Special cases — double roots, complex conjugate root pairs, roots equal in magnitude — require modified formulas and careful coefficient analysis.

Animations

Each animation below shows how the Separated Root Theorem estimates of each root’s magnitude evolve over successive Graeffe squaring iterations \(m = 0, 1, 2, \ldots\). Dashed horizontal lines mark the true root magnitudes; colored traces show the estimates converging toward (or slowly approaching) those targets as \(m\) increases.

Julia source scripts that generated these animations are linked under each case.

Case 1 — Distinct real roots, \(f(x) = (x+1)(x-2)(x+3)(x-4)\), true \(|r_k| \in \{1,2,3,4\}\)

Behavior: All four estimates converge quadratically: the error roughly squares at each step, because the separation ratios \(|r_k/r_{k+1}|^{2^m}\) blow up doubly exponentially in \(m\), pushing the coefficient ratios cleanly toward the individual root magnitudes. Even though the roots are initially close (\(|r_1|/|r_2| = 4/3\)), a few squaring steps are enough to separate them.

Julia source

Graeffe convergence animation for a degree-4 polynomial with four distinct real roots −3, −1, 2, 4. Each colored trace shows one root-magnitude estimate growing from its initial poor approximation toward the dashed true-value line as the squaring iteration count increases.
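The digit-doubling behavior can be checked numerically. A self-contained Python sketch (the compact closed-form squaring step and helper name are illustrative assumptions) that measures the worst root-magnitude error after \(m\) squarings for this polynomial:

```python
def graeffe_step(a):
    """One root-squaring step; `a` lists a_0..a_n, constant term first."""
    n = len(a) - 1
    return [(-1) ** (n + k)
            * (a[k] ** 2
               + 2 * sum((-1) ** d * a[k - d] * a[k + d]
                         for d in range(1, min(k, n - k) + 1)))
            for k in range(n + 1)]

def max_error(a, true_mags, m):
    """Largest |estimate - true| over all root-magnitude estimates after m squarings."""
    c = list(a)
    for _ in range(m):
        c = graeffe_step(c)
    n = len(c) - 1
    ests = [abs(c[n - k] / c[n - k + 1]) ** (1.0 / 2 ** m) for k in range(1, n + 1)]
    return max(abs(e - t) for e, t in zip(ests, true_mags))

# Case 1: f(x) = (x+1)(x-2)(x+3)(x-4) = x^4 - 2x^3 - 13x^2 + 14x + 24
a = [24.0, 14.0, -13.0, -2.0, 1.0]
errs = [max_error(a, [4.0, 3.0, 2.0, 1.0], m) for m in (3, 4, 5, 6)]
# errs shrinks super-linearly: each entry is roughly the square of the previous one
```

By \(m = 6\) the worst error is already below \(10^{-8}\), consistent with the digit count roughly doubling per step.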

Case 2 — Repeated real root at \(x = 5\), \(f(x) = (x-1)(x+2)(x-5)^2\), true \(|r_k| \in \{1,2,5,5\}\)

Behavior: The two estimates for the double root (both targeting \(|r| = 5\)) converge linearly — the error is halved each iteration rather than squared — because the separated-root condition \(|r_k|^{2^m} \gg |r_{k+1}|^{2^m}\) is never satisfied when two roots share the same magnitude. The two simple roots (\(|r| = 1\) and \(|r| = 2\)) converge quadratically as usual. This contrast in convergence rate is the diagnostic signature of a repeated root.

Julia source

Graeffe convergence animation for a degree-4 polynomial with a double root at x=5 and simple roots at x=1 and x=−2. The two blue traces for the double root approach the dashed line at 5 slowly (linear convergence), while the crimson and orange traces for the simple roots converge rapidly (quadratic convergence).
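The halving of the error at the double root can likewise be verified numerically with a Python sketch under the same conventions (coefficients constant term first; the squaring-step formula is an illustrative assumption):

```python
def graeffe_step(a):
    """One root-squaring step; `a` lists a_0..a_n, constant term first."""
    n = len(a) - 1
    return [(-1) ** (n + k)
            * (a[k] ** 2
               + 2 * sum((-1) ** d * a[k - d] * a[k + d]
                         for d in range(1, min(k, n - k) + 1)))
            for k in range(n + 1)]

# Case 2: f(x) = (x-1)(x+2)(x-5)^2 = x^4 - 9x^3 + 13x^2 + 45x - 50
a = [-50.0, 45.0, 13.0, -9.0, 1.0]

errs = []
c = list(a)
for m in range(1, 8):
    c = graeffe_step(c)
    n = len(c) - 1
    # the k = 1 estimate targets the double root |r| = 5
    est = abs(c[n - 1] / c[n]) ** (1.0 / 2 ** m)
    errs.append(abs(est - 5.0))

ratios = [e2 / e1 for e1, e2 in zip(errs, errs[1:])]
# the later ratios cluster near 1/2: the error halves per iteration
```

The estimate here behaves like \(5 \cdot 2^{1/2^m}\) (the coefficient ratio picks up a factor of 2 from the two coincident roots), so the relative error is about \(\ln 2 / 2^m\) and is cut in half each step rather than squared.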

References

Mathews, John H. 2000–2019. Numerical Analysis - Numerical Methods Modules. https://web.archive.org/web/20190808102217/http://mathfaculty.fullerton.edu/mathews/n2003/NumericalUndergradMod.html.