The Jacobi–Anger Expansion

The Jacobi–Anger expansion is a classical mathematical identity that expresses complex exponentials as infinite series of Bessel functions [1]. In quantum computing it plays a central role: it is the tool that lets us approximate the time-evolution operator $e^{-iHt}$ as a polynomial in $H$, which is the bridge that makes block-encoding-based Hamiltonian simulation possible. This tutorial covers the expansion formulas, their Chebyshev polynomial form, and how the required polynomial degree scales with the evolution time $t$ and the target approximation error $\epsilon$. The three quantum algorithms that build on this expansion (GQSP, QSVT, and Qubitization) are listed under See Also below.

The Expansion

The most general form of the Jacobi–Anger expansion [1] gives:

$$e^{it\cos(x)} = \sum_{k=-\infty}^{\infty} i^k J_k(t)\, e^{ikx}, \qquad (1)$$

$$e^{it\sin(x)} = \sum_{k=-\infty}^{\infty} J_k(t)\, e^{ikx}, \qquad (2)$$

from which we can derive Chebyshev polynomial series for the real-valued functions:

$$\cos(xt) = J_0(t) + 2\sum_{k=1}^{\infty} (-1)^k J_{2k}(t)\, T_{2k}(x), \qquad (3)$$

$$\sin(xt) = 2\sum_{k=0}^{\infty} (-1)^k J_{2k+1}(t)\, T_{2k+1}(x), \qquad (4)$$

where $J_k(x)$ is the Bessel function of the first kind of order $k$, and $T_k(x)$ is the Chebyshev polynomial of order $k$. Eq. (1) is directly used in the GQSP method (applied to the walk operator). Eqs. (3) and (4) are used by both the QSVT method and the Qubitization method.
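As a sanity check, Eqs. (3) and (4) can be verified numerically with standard scientific Python tools, independently of the Classiq package. The sketch below uses `scipy.special.jv` for the Bessel functions and the identity $T_n(x) = \cos(n \arccos x)$ on $[-1, 1]$; the time $t = 5$ and the cutoff $K = 30$ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_k(t)

t = 5.0
x = np.linspace(-1, 1, 201)
theta = np.arccos(x)  # T_n(x) = cos(n * arccos(x)) on [-1, 1]
K = 30  # retained series terms; J_k(t) is negligible once k greatly exceeds t

# Eq. (3): cos(xt) = J_0(t) + 2 * sum_{k>=1} (-1)^k J_{2k}(t) T_{2k}(x)
cos_series = jv(0, t) + 2 * sum(
    (-1) ** k * jv(2 * k, t) * np.cos(2 * k * theta) for k in range(1, K + 1)
)

# Eq. (4): sin(xt) = 2 * sum_{k>=0} (-1)^k J_{2k+1}(t) T_{2k+1}(x)
sin_series = 2 * sum(
    (-1) ** k * jv(2 * k + 1, t) * np.cos((2 * k + 1) * theta) for k in range(K + 1)
)

print(np.max(np.abs(cos_series - np.cos(t * x))))  # close to machine precision
print(np.max(np.abs(sin_series - np.sin(t * x))))
```

Because $|J_k(t)|$ decays super-exponentially once $k$ exceeds $t$, thirty terms already reproduce both functions to near machine precision.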

Truncation and Error Bound

The infinite series in Eqs. (3) and (4) can be truncated at degree $d$, giving polynomial approximations of $\cos(xt)$ and $\sin(xt)$. The required degree for a target approximation error $\epsilon$ and evolution time $t$ is:

$$d = O\left(t + \frac{\log(1/\epsilon)}{\log\left(e + \frac{\log(1/\epsilon)}{t}\right)}\right). \qquad (5)$$

This scaling, linear in $t$ and logarithmic in $1/\epsilon$, is optimal: it matches the quantum query complexity lower bound for Hamiltonian simulation [2]. This is one of the key reasons the block-encoding family of algorithms is asymptotically optimal. Classiq's QSP application includes all five formulas above. Next we demonstrate the approximation for a given evolution time $t$ and error $\epsilon$, for the expansions of $\cos(xt)$ and $\sin(xt)$ (Eqs. (3) and (4) above).
import matplotlib.pyplot as plt
import numpy as np

from classiq.applications.qsp.qsp import (
    poly_jacobi_anger_cos,
    poly_jacobi_anger_degree,
    poly_jacobi_anger_sin,
)

EVOLUTION_TIME = 22
EPS = 1e-7

degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME)
print(f"Polynomial degree for t={EVOLUTION_TIME}, ε={EPS}: d = {degree}")
Output:

Polynomial degree for t=22, ε=1e-07: d = 40
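The scaling of Eq. (5) can also be illustrated without the Classiq helper. Since $|J_k(t)|$ decays super-exponentially once $k$ exceeds $t$, the smallest degree whose Bessel tail drops below $\epsilon$ grows essentially linearly in $t$ and only weakly in $\log(1/\epsilon)$. The sketch below estimates such a degree from the tail bound $2\sum_{k>d} |J_k(t)|$; this is an illustrative heuristic, not the exact rule implemented by `poly_jacobi_anger_degree`:

```python
import numpy as np
from scipy.special import jv

def tail_degree(t: float, eps: float, k_max: int = 400) -> int:
    """Smallest d such that 2 * sum_{k > d} |J_k(t)| < eps (heuristic error bound)."""
    absj = np.abs(jv(np.arange(1, k_max + 1), t))
    tails = 2 * np.cumsum(absj[::-1])[::-1]  # tails[d] = 2 * sum_{k > d} |J_k(t)|
    return int(np.argmax(tails < eps))

for t in (10, 22, 40):
    print(t, tail_degree(t, 1e-7))  # degree grows roughly linearly with t
print(tail_degree(22, 1e-13))  # tightening eps adds only a handful of degrees
```

The printed degrees track the behavior of Eq. (5): doubling $t$ roughly doubles $d$, while shrinking $\epsilon$ by six orders of magnitude adds only a few terms.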
  

Approximation Quality

We can visually inspect the approximation quality for $t = 22$ and $\epsilon = 10^{-7}$:
cos_coeffs = poly_jacobi_anger_cos(degree, EVOLUTION_TIME)
sin_coeffs = poly_jacobi_anger_sin(degree, EVOLUTION_TIME)

xs = np.linspace(0, 1, 100)
cos_approx = np.polynomial.Chebyshev(cos_coeffs)(xs)
sin_approx = np.polynomial.Chebyshev(sin_coeffs)(xs)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))

ax1.plot(xs, np.cos(EVOLUTION_TIME * xs), "-r", linewidth=3, label="cos")
ax1.plot(xs, cos_approx, "--k", linewidth=2, label="approx. cos")
ax1.plot(xs, np.sin(EVOLUTION_TIME * xs), "-y", linewidth=3, label="sin")
ax1.plot(xs, sin_approx, "--b", linewidth=2, label="approx. sin")
ax1.set_ylabel(r"$\cos(x)\,\,\, , \sin(x)$", fontsize=16)
ax1.set_xlabel(r"$x$", fontsize=16)
ax1.tick_params(labelsize=16)
ax1.legend(loc="lower left", fontsize=14)
ax1.set_xlim(-0.1, 1.1)
ax1.set_title("Approximation", fontsize=16)

ax2.plot(
    xs, np.cos(EVOLUTION_TIME * xs) - cos_approx, "-r", linewidth=3, label="cos error"
)
ax2.plot(
    xs, np.sin(EVOLUTION_TIME * xs) - sin_approx, "-y", linewidth=3, label="sin error"
)
ax2.set_ylabel("Error", fontsize=16)
ax2.set_xlabel(r"$x$", fontsize=16)
ax2.tick_params(labelsize=16)
ax2.yaxis.get_offset_text().set_fontsize(16)
ax2.legend(loc="lower left", fontsize=14)
ax2.set_xlim(-0.1, 1.1)
ax2.set_title("Approximation Error", fontsize=16)

plt.tight_layout();
Output:

[Figure: "Approximation" panel with the exact and approximate $\cos(xt)$, $\sin(xt)$ curves; "Approximation Error" panel with the pointwise errors.]

We can see that the approximations indeed follow the exact formulas: the dashed polynomial curves are indistinguishable from the exact functions, and the pointwise errors are on the order of the target $\epsilon$.
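To close the loop with the Hamiltonian-simulation motivation from the introduction, the same coefficients turn $e^{-iHt} = \cos(Ht) - i\sin(Ht)$ into a matrix polynomial. The sketch below is a plain linear-algebra illustration (using `scipy.linalg.expm` as the exact reference, not a block-encoded quantum circuit): it applies Eqs. (3) and (4) to a random Hermitian matrix with spectrum in $[-1, 1]$ via the Chebyshev recurrence $T_{k+1}(H) = 2HT_k(H) - T_{k-1}(H)$; the time $t = 8$ and degree $d = 30$ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2
H /= np.linalg.norm(H, 2)  # rescale so the spectrum lies in [-1, 1]

t, d = 8.0, 30  # evolution time and polynomial degree

# Chebyshev matrices via the recurrence T_{k+1}(H) = 2 H T_k(H) - T_{k-1}(H)
T = [np.eye(4, dtype=complex), H.astype(complex)]
for _ in range(d - 1):
    T.append(2 * H @ T[-1] - T[-2])

# Eqs. (3) and (4) with x -> H
cos_Ht = jv(0, t) * T[0] + 2 * sum(
    (-1) ** k * jv(2 * k, t) * T[2 * k] for k in range(1, d // 2 + 1)
)
sin_Ht = 2 * sum(
    (-1) ** k * jv(2 * k + 1, t) * T[2 * k + 1] for k in range((d - 1) // 2 + 1)
)

U_poly = cos_Ht - 1j * sin_Ht  # e^{-iHt} = cos(Ht) - i sin(Ht)
print(np.linalg.norm(U_poly - expm(-1j * t * H), 2))  # tiny spectral-norm error
```

This is exactly the polynomial that the block-encoding algorithms implement on a quantum computer, where each multiplication by $H$ becomes one query to its block-encoding.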

See Also

This expansion is the mathematical foundation used by each of the three Hamiltonian simulation notebooks in this directory: GQSP, QSVT, and Qubitization.

References

[1] Jacobi–Anger Expansion, Wikipedia.

[2] Martyn, J. M., Rossi, Z. M., Tan, A. K., & Chuang, I. L. "Grand unification of quantum algorithms." PRX Quantum 2, 040203 (2021).