Brent’s Method
Source inspiration: Mathew (2000–2019).
Description
Brent’s method, developed by Richard Brent in 1971 based on earlier work by van Wijngaarden, Zonneveld, and Dijkstra (1963), Wilkinson (1967), Dekker (1969), and others, is a hybrid root-finding algorithm that guarantees convergence while achieving fast (superlinear) convergence whenever possible. It requires only that the function \(f\) be continuous on \([a, b]\) with \(f(a)\) and \(f(b)\) of opposite sign — no derivatives are needed.
The method maintains a bracket \([a, b]\) that always contains a root, and intelligently selects among three sub-methods at each step:
- Inverse quadratic interpolation — fits a quadratic through the three most recent points \((a, f(a))\), \((b, f(b))\), \((c, f(c))\) and uses the \(x\)-intercept of that quadratic. Converges superlinearly near a simple root.
- Secant method — uses the chord through the two most recent points; the fallback when inverse quadratic interpolation cannot be used (e.g., when two of the three function values coincide). Unlike regula falsi, the secant step does not require the two points to bracket the root.
- Bisection — used as a safeguard whenever neither of the above produces a sufficiently good step; guarantees the bracket shrinks by at least half.
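This selection logic can be sketched in a self-contained Python implementation following the standard Dekker–Brent formulation (the animations below were produced by Julia scripts; this `brent` helper and its variable names are ours, a sketch rather than the exact code behind the figures):

```python
def brent(f, a, b, tol=1e-12, max_iter=100):
    """Hybrid root finder: bisection + secant + inverse quadratic interpolation."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    # Keep b as the best approximation: |f(b)| <= |f(a)|.
    if abs(fa) < abs(fb):
        a, b, fa, fb = b, a, fb, fa
    c, fc = a, fa            # c is the previous value of b
    d = c                    # d tracks the step before last, to gauge progress
    used_bisection = True
    for _ in range(max_iter):
        if fb == 0.0 or abs(b - a) < tol:
            return b
        if fa != fc and fb != fc:
            # Inverse quadratic interpolation through (a, fa), (b, fb), (c, fc).
            s = (a * fb * fc / ((fa - fb) * (fa - fc))
                 + b * fa * fc / ((fb - fa) * (fb - fc))
                 + c * fa * fb / ((fc - fa) * (fc - fb)))
        else:
            # Secant step through the two most recent points.
            s = b - fb * (b - a) / (fb - fa)
        # Safeguards: bisect if s is outside the useful range or progress is slow.
        lo, hi = sorted(((3 * a + b) / 4, b))
        if (not (lo < s < hi)
                or (used_bisection and abs(s - b) >= abs(b - c) / 2)
                or (not used_bisection and abs(s - b) >= abs(c - d) / 2)
                or (used_bisection and abs(b - c) < tol)
                or (not used_bisection and abs(c - d) < tol)):
            s = (a + b) / 2
            used_bisection = True
        else:
            used_bisection = False
        fs = f(s)
        d, c, fc = c, b, fb
        if fa * fs < 0:      # root lies between a and s
            b, fb = s, fs
        else:                # root lies between s and b
            a, fa = s, fs
        if abs(fa) < abs(fb):
            a, b, fa, fb = b, a, fb, fa
    return b

# Case 1 below: the cubic has a root near 0.3285 inside [0, 1].
root = brent(lambda x: 4*x**3 - 16*x**2 + 17*x - 4, 0.0, 1.0)
```

The interplay of the three sub-methods is all in the safeguard block: an interpolated step is kept only if it lies in the useful part of the bracket and at least halves the recent progress measure; otherwise the iteration bisects.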
State Variables
At a typical step, Brent’s method tracks three points — \(a\), \(b\), \(c\) — satisfying:
\[f(b) \cdot f(c) \leq 0, \qquad |f(b)| \leq |f(c)|\]
so that \(b\) is the current best approximation to the root and \(a\) is the previous value of \(b\). The sign condition \(f(b) \cdot f(c) \leq 0\) guarantees that a root lies between \(b\) and \(c\).
Inverse Quadratic Interpolation Formula
Given function values \(y_a = f(a)\), \(y_b = f(b)\), \(y_c = f(c)\), the next estimate is:
\[s = \frac{y_b \, y_c}{(y_a - y_b)(y_a - y_c)}\,a + \frac{y_a \, y_c}{(y_b - y_a)(y_b - y_c)}\,b + \frac{y_a \, y_b}{(y_c - y_a)(y_c - y_b)}\,c\]
This step is accepted only if \(s\) falls inside the current bracket and the method is making adequate progress; otherwise bisection or secant is used instead.
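As a sanity check, the formula can be packaged as a small function (`iqi_step` is a hypothetical helper, not from the source). For points lying exactly on a sideways parabola \(x = y^2 + 3\), the interpolant is exact and the step recovers the root \(x = 3\) directly:

```python
def iqi_step(a, b, c, ya, yb, yc):
    """x-intercept of the quadratic x = q(y) through (ya, a), (yb, b), (yc, c)."""
    return (a * yb * yc / ((ya - yb) * (ya - yc))
            + b * ya * yc / ((yb - ya) * (yb - yc))
            + c * ya * yb / ((yc - ya) * (yc - yb)))

# Points on x = y^2 + 3: (x, y) = (4, -1), (4, 1), (7, 2); the root is at x = 3.
s = iqi_step(4.0, 4.0, 7.0, -1.0, 1.0, 2.0)
```

Note the interpolation is in \(x\) as a function of \(y\) (hence "inverse"), so duplicate \(x\)-values are harmless as long as the three \(y\)-values are distinct.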
Secant Method Formula
When only two distinct function values are available (or IQI is rejected), the method falls back to the secant step:
\[s = b - f(b) \cdot \frac{b - a}{f(b) - f(a)}\]
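Packaged the same way (again a hypothetical helper), the secant step is exact on affine functions; for \(f(x) = 2x - 6\) sampled at \(x = 0\) and \(x = 5\) it lands on the root in one step:

```python
def secant_step(a, b, fa, fb):
    """x-intercept of the chord through (a, fa) and (b, fb)."""
    return b - fb * (b - a) / (fb - fa)

# f(x) = 2x - 6: f(0) = -6, f(5) = 4; the chord crosses zero at x = 3.
s = secant_step(0.0, 5.0, -6.0, 4.0)  # → 3.0
```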
Convergence
- Guaranteed: In the worst case the bracket is halved at least every other step (as in bisection), so its width after \(n\) iterations is \(O(2^{-n/2})\); convergence is never slower than geometric.
- Fast case: When IQI applies consistently near a simple root, convergence is superlinear with order approximately 1.839 (the positive real root of \(t^3 = t^2 + t + 1\)).
- Robustness: Unlike the secant method or IQI alone, Brent’s method cannot fail to converge as long as the initial bracket is valid and \(f\) is continuous.
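The guaranteed rate is easy to quantify against the bisection baseline. In a short sketch (the helper name is ours), the number of pure-bisection steps needed to shrink a bracket of width \(w\) below a tolerance \(\tau\) is \(\lceil \log_2(w/\tau) \rceil\); since a bisection is forced at least every other step, Brent's worst case is roughly twice this:

```python
import math

def bisection_iterations(width, tol):
    """Bisection steps needed to shrink a bracket of the given width below tol."""
    return math.ceil(math.log2(width / tol))

n = bisection_iterations(1.0, 1e-12)  # unit bracket down to 1e-12
```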
Comparison with Constituent Methods
| Method | Guaranteed? | Convergence order | Derivatives needed? |
|---|---|---|---|
| Bisection | Yes | 1 (linear) | No |
| Regula Falsi | Yes (for sign change) | 1 (linear; slow for flat \(f\)) | No |
| Secant | No | \(\approx 1.618\) (superlinear) | No |
| Inverse Quadratic | No | \(\approx 1.839\) | No |
| Brent’s Method | Yes | Up to \(\approx 1.839\) | No |
Animations
Each animation below shows the bracket-narrowing diagram for Brent’s method. The shaded region marks the current bracket \([a, b]\) known to contain a root. The red dot tracks \(b\), the best current approximation (the endpoint with the smallest \(|f|\) value). The step annotation indicates which sub-method — Bisection, Secant, or IQI — was selected at that iteration.
Julia source scripts that generated these animations are linked under each case.
Case 1 — Polynomial root, \(f(x) = 4x^3 - 16x^2 + 17x - 4\), \([a_0, b_0] = [0, 1]\)
Behavior: \(f(0) = -4 < 0\) and \(f(1) = 1 > 0\), so the IVT guarantees a root in \([0, 1]\). The true root is \(x^* \approx 0.3285\). Brent’s method begins with secant steps and quickly switches to IQI, converging superlinearly. This illustrates the method’s efficiency near a simple root — far fewer iterations than bisection or regula falsi would require.
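The bracket claim is easy to verify numerically; a minimal stand-in check with plain bisection (not the Brent run behind the animation) confirms the sign change and locates the root near \(x \approx 0.3285\):

```python
# Case 1: f has a sign change on [0, 1]; bisect to locate the root.
f = lambda x: 4*x**3 - 16*x**2 + 17*x - 4
a, b = 0.0, 1.0
assert f(a) < 0 < f(b)        # IVT bracket holds
while b - a > 1e-10:
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:      # sign change in the left half
        b = m
    else:                     # sign change in the right half
        a = m
root = 0.5 * (a + b)
```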

Case 2 — Transcendental equation, \(f(x) = \tan(x) - 2x\), \([a_0, b_0] = [0.5, 1.5]\)
Behavior: The non-trivial root of \(\tan(x) = 2x\) lies at \(x^* \approx 1.1656\). \(f(0.5) = \tan(0.5) - 1 \approx -0.454 < 0\) and \(f(1.5) = \tan(1.5) - 3 \approx 11.1 > 0\). Brent’s method converges in a handful of steps, while the Regula Falsi method requires ≈ 243 iterations on this same interval — because \(f(1.5)\) is so large that the chord always lands near \(x = 0.5\), leaving the right endpoint stationary. Brent’s method safeguards against exactly this pathology.
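The stall is straightforward to reproduce. In a minimal Python sketch of false position (our helper; the exact count depends on the stopping tolerance, so it need not land on exactly 243), the left endpoint creeps toward the root over hundreds of iterations while \(x = 1.5\) never moves:

```python
import math

def regula_falsi(f, a, b, tol=1e-12, max_iter=10000):
    """False position: secant-style steps that always keep a sign-change bracket."""
    fa, fb = f(a), f(b)
    for n in range(1, max_iter + 1):
        x = b - fb * (b - a) / (fb - fa)   # chord through the bracket endpoints
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        if fa * fx < 0:                    # root in [a, x]: move the right endpoint
            b, fb = x, fx
        else:                              # root in [x, b]: move the left endpoint
            a, fa = x, fx
    return x, max_iter

root, n_iter = regula_falsi(lambda x: math.tan(x) - 2.0 * x, 0.5, 1.5)
```

Because \(f\) is convex on this interval, every chord crosses zero to the left of the root, so only the left endpoint is ever updated — the stationary-endpoint pathology described above.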
