This implementation is based on the paper [1] and was written in collaboration with Chi-Fang (Anthony) Chen, the first author of the paper.

Quantum thermal state preparation is the task of preparing the Gibbs state [2], which is the quantum state of a system in thermal equilibrium. Unlike ground state preparation, here one looks for the properties of a quantum system coupled to an environment at a given temperature. Besides simulating nature's behavior, it is an algorithmic primitive that can be used within other quantum algorithms, for example for solving semi-definite programs [3].

Formally, the Gibbs state is a mixed state, defined as:

$$
\sigma_\beta := \frac{e^{-\beta H}}{Z} \propto \sum_i e^{-\beta E_i} |\psi_i\rangle\langle\psi_i|, \qquad Z := \mathrm{tr}\left[e^{-\beta H}\right].
$$

Notice that since the state is a mixed state, we describe it using a density matrix.
The algorithm is a quantum version of the classical Markov chain Monte Carlo (MCMC) algorithm [4]. It is composed of the following steps:
Apply a random jump/transformation on the state from a set of jumps $A$ (usually a local transformation, but not necessarily).
Measure the energy difference $\Delta\omega$.
Accept the jump with probability proportional to $e^{-\beta\Delta\omega}$; otherwise reject the step and revert to the original state.
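The three steps above can be sketched as a classical Metropolis MCMC loop. This snippet is our own illustration (the toy energies and parameters are ours, chosen to match the two-qubit example used later):

```python
import numpy as np

# A classical Metropolis MCMC sketch of the three steps above, sampling the
# Gibbs distribution of a small diagonal Hamiltonian (illustrative only; the
# energies and parameters are our own toy choices).
rng = np.random.default_rng(42)

energies = np.array([2.0, 0.0, -2.0, 0.0])  # E_i of four basis states
beta = 0.2


def metropolis_chain(n_steps):
    counts = np.zeros(len(energies))
    state = 0
    for _ in range(n_steps):
        proposal = rng.integers(len(energies))          # 1. random jump
        d_omega = energies[proposal] - energies[state]  # 2. energy difference
        if rng.random() < min(1.0, np.exp(-beta * d_omega)):
            state = proposal                            # 3. accept...
        counts[state] += 1                              # ...or keep the old state
    return counts / n_steps


empirical = metropolis_chain(100_000)
gibbs = np.exp(-beta * energies) / np.exp(-beta * energies).sum()
print(empirical.round(3), gibbs.round(3))
```

With enough steps the empirical histogram approaches the Gibbs distribution, which is the behavior the quantum algorithm reproduces without measuring energies exactly.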
Instead of taking discrete jumps, it is possible to simulate a continuous process by simulating evolution under a generator matrix (a Laplacian $L$ in the classical case, a Lindbladian $\mathcal{L}$ [5] in the quantum case). The quantum algorithm is then just a quantum simulation of a block-encoded Lindbladian, which is designed to have a fixed point approximately at the Gibbs state.

Any Lindbladian can be written in the following form:

$$
\mathcal{L}(\rho) = \underbrace{-i[H, \rho]}_{\text{Coherent (unitary) dynamics}} + \underbrace{\sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho\} \right)}_{\text{Dissipative (non-coherent) dynamics}}
$$

where the evolution is according to:

$$
\rho(t) = e^{t\mathcal{L}_\beta} \rho(0)
$$

with $L_k$ being the Lindblad (jump) operators.

The algorithm engineers a Lindbladian that consists only of dissipative terms. Each term $\hat{A}_a(\bar\omega)$ is the Fourier transform of a jump operator in $A$:

$$
\mathcal{L}_\beta(\rho) := \sum_{a \in A,\, \bar\omega \in S_{\omega_0}} \gamma(\bar\omega) \left( \hat{A}_a(\bar\omega)\, \rho\, \hat{A}_a(\bar\omega)^\dagger - \frac{1}{2}\{\hat{A}_a(\bar\omega)^\dagger \hat{A}_a(\bar\omega), \rho\} \right)
$$
There are two main challenges in translating the classical Monte Carlo algorithm to a quantum version:

Energy uncertainty: measuring the energy with high resolution has exponential cost, due to the energy-time uncertainty relation.

Rejection step: it turns out that rejecting a move is not trivial, as the acceptance step involves measurements, which disturb the state.
To deal with the first problem, the algorithm presents the Operator Fourier Transform primitive, which smoothly estimates the energy difference in a way that guarantees reaching approximate detailed balance [6]. To tackle the second problem, the algorithm takes advantage of mid-circuit measurements and the quantum Zeno effect [7], which turn out to work well with the Lindbladian simulation.
The total simulation time is $O\left(\frac{\beta\, t_{\mathrm{mix}}^2(\beta)}{\epsilon}\right)$, where $t_{\mathrm{mix}}(\beta)$ is the mixing time of the Lindbladian, measuring how long it takes for different mixed states to become indistinguishable under the evolution of $\mathcal{L}$. The mixing time is not trivial to calculate, but can be estimated in certain cases, such as [8]. The algorithm might give an exponential advantage for quantum systems that thermalize fast enough and have the sign problem [9], so that no efficient classical alternatives exist.
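To build intuition for $t_{\mathrm{mix}}$, here is a tiny classical analogue (our own illustration, not from the paper): two different initial distributions evolved under the same generator become exponentially indistinguishable in total-variation distance.

```python
import numpy as np

# Classical analogue (ours) of the mixing time: two different initial
# probability vectors evolved under the same generator L converge to the same
# fixed point, so their total-variation distance decays exponentially.
L = np.array([[-1.0, 2.0],
              [1.0, -2.0]])  # a 2-state generator; columns sum to zero


def evolve(p, t, steps=1000):
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (L @ p)  # first-order (Euler) integration of dp/dt = L p
    return p


p0 = np.array([1.0, 0.0])
q0 = np.array([0.0, 1.0])
for t in [0.0, 1.0, 3.0]:
    tv = 0.5 * np.abs(evolve(p0, t) - evolve(q0, t)).sum()
    print(f"t={t}: TV distance = {tv:.5f}")
```

The time at which this distance drops below a fixed threshold is the mixing time; the algorithm's cost scales with it.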
Here we define a very basic problem with just 2 qubits and a diagonal Hamiltonian, so that we can implement the Hamiltonian simulation exactly and get results within the limits of a quantum simulator. We note that the algorithm does not guarantee an improvement over classical Monte Carlo for such a Hamiltonian; we choose it for didactic reasons.

The Hamiltonian will be $H = Z_1 Z_2 + Z_1 I =$

$$
\begin{pmatrix}
2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & -2 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}.
$$

The second parameter for the problem is $\beta = \frac{1}{T}$. Here we choose a fairly small value, for fast convergence.
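Since the Hamiltonian is small and diagonal, we can compute the exact target Gibbs state classically with NumPy as a sanity check (this snippet is our own addition, not part of the algorithm):

```python
import numpy as np

# Exact Gibbs state of H = Z1 Z2 + Z1 (classical sanity check, ours).
Zg = np.diag([1.0, -1.0])  # single-qubit Pauli-Z
I2 = np.eye(2)
H = np.kron(Zg, Zg) + np.kron(Zg, I2)  # = diag(2, 0, -2, 0)

beta = 0.2
# H is diagonal, so exp(-beta * H) is the elementwise exponential of its diagonal
weights = np.exp(-beta * np.diag(H))
sigma_beta = np.diag(weights / weights.sum())  # Gibbs density matrix

print(np.diag(sigma_beta).round(3))  # ground state (index 2) is the most likely
```

These diagonal probabilities are the distribution we will compare the quantum results against at the end of the notebook.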
```python
import numpy as np

from classiq import *

# Problem:
HAMILTONIAN = [
    PauliTerm(pauli=[Pauli.Z, Pauli.Z], coefficient=1),
    PauliTerm(pauli=[Pauli.Z, Pauli.I], coefficient=1),
]
SYSTEM_SIZE = len(HAMILTONIAN[0].pauli)
MAX_ENERGY_SHIFT = 4  # a bound on the energy shift, to normalize the Hamiltonian by
BETA = 0.2
```
This is the heart of the algorithm. This building block is quite similar to Quantum Phase Estimation. However, there are two main differences:
The initial state prepared for the phase register is a Gaussian distribution instead of a uniform one. It gives better guarantees on the error of the estimated energy, which is required because the time window is limited.
The circuit measures the energy difference that the operator induces between eigenstates, instead of just the energy of each eigenstate in superposition.
For example, take an operator $O$ and a Hamiltonian $H$ with eigenstates $|\lambda_i\rangle$ and corresponding energies $\omega_i$. If for $O$ and $|\lambda_0\rangle$ it holds that $O|\lambda_0\rangle = |\lambda_1\rangle + |\lambda_2\rangle$ (up to normalization), then the output of the circuit for the input $|\lambda_0\rangle|0\rangle$ will be approximately $|\lambda_1\rangle|\omega_1 - \omega_0\rangle + |\lambda_2\rangle|\omega_2 - \omega_0\rangle$.

Figure 2: Circuit for the Operator Fourier Transform for an operator $O$ acting on the state $\rho$, given Hamiltonian $H$. Reproduced from [1].
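This effect can be mimicked classically: the matrix element $\langle\lambda_i|O(t)|\lambda_0\rangle$ of the Heisenberg-picture operator oscillates at the Bohr frequency $\omega_i - \omega_0$, which a discrete Fourier transform picks out. A small NumPy illustration (our own, with toy energies):

```python
import numpy as np

# The Heisenberg-picture matrix element <l2| O(t) |l0> oscillates at the Bohr
# frequency E2 - E0; a DFT over a sampled time grid recovers it (toy example).
energies = np.array([2.0, 0.0, -2.0, 0.0])
O = np.ones((4, 4))  # an operator with non-zero off-diagonal matrix elements

N = 64
w0 = 1.0
t0 = 2 * np.pi / (N * w0)  # so that w0 * t0 = 2*pi / N
times = t0 * np.arange(N)

signal = []
for t in times:
    U = np.diag(np.exp(-1j * energies * t))  # e^{-iHt} for a diagonal H
    O_t = U.conj().T @ O @ U                 # Heisenberg picture O(t)
    signal.append(O_t[2, 0])                 # oscillates as e^{i (E2 - E0) t}
signal = np.array(signal)

spectrum = np.abs(np.fft.fft(signal))
peak_freq = 2 * np.pi * np.fft.fftfreq(N, d=t0)[np.argmax(spectrum)]
print(peak_freq)  # recovers the Bohr frequency E2 - E0 = -4
```

The quantum circuit performs the same Fourier analysis coherently, writing the detected frequency into the `bohr_freq` register.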
First we define a Gaussian state preparation function. For now we choose the truncation value and $\sigma$ somewhat arbitrarily, qualitatively such that the bulk of the Gaussian lies within the truncation window.
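For reference, the amplitudes of such a truncated Gaussian window can be computed classically. This is a sketch of ours; we assume `prepare_gaussian_state` loads an equivalent state onto the frequency register, and the value of $\sigma$ below is our own qualitative choice:

```python
import numpy as np

# Amplitudes of a truncated Gaussian window over a signed integer register
# (a classical sketch; prepare_gaussian_state is assumed to load an
# equivalent state, and sigma is our own qualitative choice).
NUM_QUBITS = 4
N = 2**NUM_QUBITS
sigma = N / 4  # chosen so the bulk of the Gaussian fits inside the window

grid = np.arange(-N // 2, N // 2)  # signed values the register can take
amplitudes = np.exp(-(grid**2) / (4 * sigma**2))  # prob. ~ exp(-x^2 / (2 sigma^2))
amplitudes = amplitudes / np.linalg.norm(amplitudes)  # valid quantum amplitudes

print(amplitudes.round(3))
```

The window peaks at register value 0 and decays smoothly toward the truncation edges, which is what gives the smooth energy estimate.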
```python
@qfunc
def hamiltonian_simulation(hamiltonian: CArray[PauliTerm], time: CReal, state: QArray):
    suzuki_trotter(
        hamiltonian,
        evolution_coefficient=time,
        order=1,
        repetitions=1,  # 1 repetition suffices since we work with a diagonal Hamiltonian
        qbv=state,
    )


@qfunc
def controlled_hamiltonian_simulation(
    hamiltonian: CArray[PauliTerm], t0: CReal, ctrl: QArray, state: QArray
):
    """
    A controlled powered hamiltonian simulation, as done in QPE of hamiltonian simulation
    """
    repeat(
        ctrl.len,
        lambda i: control(
            ctrl[i], lambda: hamiltonian_simulation(hamiltonian, 2**i * t0, state)
        ),
    )


@qfunc
def operator_fourier_transform(
    hamiltonian: CArray[PauliTerm],
    t0: CReal,
    operator: QCallable[QArray],
    state: QArray,
    bohr_freq: QNum,
):
    prepare_gaussian_state(bohr_freq)
    within_apply(
        lambda: controlled_hamiltonian_simulation(hamiltonian, t0, bohr_freq, state),
        lambda: operator(state),
    )
    invert(lambda: qft(bohr_freq))
```
We identify the quantum variable `bohr_freq` as a signed integer, with $\omega = \omega_0 \cdot \texttt{bohr\_freq}$ and $t = t_0 \cdot \texttt{bohr\_freq}$, where $\omega_0 t_0 = \frac{2\pi}{N}$. It is analogous to the phase variable in Quantum Phase Estimation. We require $\|H\| < \frac{N\omega_0}{2}$, so that the energy estimation does not overflow. We also want to take advantage of the full resolution of the QFT, so we choose $\omega_0$ close to the bound.
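The helper `get_fourier_parameters` used further below is not defined in this section; under the constraints above, a plausible sketch of what it computes (our own assumption about its logic) is:

```python
import numpy as np

# A plausible sketch (our assumption) of how the Fourier parameters may be
# chosen: pick w0 at the overflow bound ||H|| <= N * w0 / 2 to use the full
# QFT resolution, then set t0 from w0 * t0 = 2*pi / N.
def get_fourier_parameters(window_size, max_energy_shift):
    N = 2**window_size
    w0 = 2 * max_energy_shift / N  # close to the bound, for full resolution
    t0 = 2 * np.pi / (N * w0)
    return w0, t0


w0, t0 = get_fourier_parameters(4, 4)
print(w0, t0)
```

With a 4-bit window and an energy bound of 4, this gives $\omega_0 = 0.5$, so the register values $-8,\dots,7$ cover energy differences in $[-4, 3.5]$.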
Let's see how it works for the initial state $|0\rangle_{\text{state}}|0\rangle_{\text{freq}}$ and the operator $O = H \otimes H$ (where here $H$ denotes Hadamard gates acting on the state variable).
We expect jumps from the eigenvalue 2 to eigenvalues 0 and -2:
Quantum program link: https://platform.classiq.io/circuit/32pWF7FtAuB0u3g07eRAOX47YmG
Figure 3: Circuit for the Operator Fourier Transform for the operator $O = H \otimes H$, as produced by the Classiq platform.
```python
import matplotlib.pyplot as plt


def plot_omega_results(result):
    w = [sample.state["w"] for sample in result.parsed_counts]
    shots = [sample.shots for sample in result.parsed_counts]

    # Scale the w values
    w = np.array(w) * w0

    # Create the scatter plot
    plt.scatter(w, shots, color="blue", alpha=0.7, edgecolors="none")
    plt.xlabel(r"$\Delta\omega$")
    plt.ylabel("Counts")
    plt.title("Measured Energy Difference")
    plt.grid(True)
    plt.xlim(-5, 5)
    plt.tight_layout()
    plt.show()


plot_omega_results(res)
```
Now let's see what happens if we use a uniform distribution instead of a Gaussian one as the initial state of the `bohr_freq` register:
First we pick jump operators that will define the dissipative part of our Lindbladian. It is enough to choose local operators, so here we just use single-site Pauli operators.
```python
# For better mixing and ergodicity, the jumps should be "scrambling" and not commute
# with the Hamiltonian (e.g., breaking the symmetries of the Hamiltonian).
# In our case it is enough to just take X jumps, but we do also Y for demonstration
JUMP_OPERATORS = [
    {"pauli": pauli, "index": index}
    for index in range(SYSTEM_SIZE)
    for pauli in [X, Y]
]
JUMP_VAR_SIZE = max(1, np.ceil(np.log2(len(JUMP_OPERATORS))))
```
Given a purely irreversible Lindbladian:

$$
\mathcal{L}[\rho] := \sum_{j \in J} \left( L_j \rho L_j^\dagger - \frac{1}{2} L_j^\dagger L_j \rho - \frac{1}{2} \rho L_j^\dagger L_j \right),
$$

with Lindblad operators $L_j$, it is possible to block-encode it with a unitary $U$ such that:

$$
(\langle 0^b| \otimes I)\, U\, (|0^c\rangle \otimes I) = \sum_{j \in J} |j\rangle \otimes L_j.
$$

In our case, each Lindblad operator is effectively a Fourier mode $\hat{A}_a(\bar\omega)$, the operator Fourier transform of a Pauli jump operator, weighted by a Boltzmann weight $\gamma(\bar\omega)$:

$$
\hat{A}_a(\bar\omega) := \sum_{\bar t \in S_{t_0}} e^{-i \bar\omega \bar t} f(\bar t)\, A_a(\bar t) \quad \text{for each } a \in A,\ \bar\omega \in S_{\omega_0},
$$

$$
(\langle 0^b| \langle 0|_{\text{boltz.}} \otimes I)\, U\, (|0^b\rangle |0\rangle_{\text{boltz.}} |0\rangle_{\text{freq}} |0\rangle_{\text{jump}} \otimes I) = \sum_{a \in A,\ \bar\omega \in S_{\omega_0}} \sqrt{\gamma(\bar\omega)}\, |\bar\omega, a\rangle \otimes \hat{A}_a(\bar\omega).
$$

We implement it with a single call to the operator Fourier transform, applied to a block encoding of all Pauli jumps, followed by a Y rotation based on $\bar\omega$. Here, we take $A = \{X_1, X_2, Y_1, Y_2\}$.

Figure 4: Circuit for block encoding the Lindbladian $\mathcal{L}$. Reproduced from [1].
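What makes the weights $\gamma(\bar\omega)$ work is the detailed balance condition $\gamma(\omega) = e^{-\beta\omega}\gamma(-\omega)$: up-jumps in energy are suppressed exactly as the Gibbs state requires. For instance, a Metropolis-style choice satisfies it, which we can check numerically (the specific form of $\gamma$ below is our illustrative assumption, not necessarily the one used internally):

```python
import numpy as np

# Numerical check of the detailed balance condition for a Metropolis-style
# Boltzmann weight (an illustrative choice of gamma, assumed here).
beta = 0.2


def gamma(w):
    return np.minimum(1.0, np.exp(-beta * w))


freqs = np.linspace(-4.0, 4.0, 17)  # Bohr frequencies on the w0-grid
balanced = np.allclose(gamma(freqs), np.exp(-beta * freqs) * gamma(-freqs))
print(balanced)  # gamma(w) = exp(-beta * w) * gamma(-w) holds
```

Any $\gamma$ satisfying this identity yields a Lindbladian whose fixed point is (approximately) the Gibbs state.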
Given our block encoding $U$, and assuming the Lindbladian is purely irreversible, it is possible to evolve it for a timestep using a first-order approximation, $I + \delta\mathcal{L} + O(\delta^2)$, using weak measurements (a weak measurement reveals only a small amount of information about the system, and does not entirely collapse the state). In fact, it exploits a quantum Zeno-like effect [7] that keeps the corrections to the evolution quadratic in $\delta$!

Note: this method is the first-order approximation of the evolution. There are better-scaling methods with higher-order approximations for simulating the evolution of the Lindbladian (see Appendix F in [1]).

Figure 5: Circuit for an approximate $\delta$-time step evolution of the Lindbladian $\mathcal{L}$. Reproduced from [1].
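A classical analogue of these repeated $\delta$-steps (our own toy model, with Metropolis rates standing in for the engineered dissipation) shows the first-order map $p \mapsto p + \delta\, L p$ driving any distribution to the Gibbs fixed point:

```python
import numpy as np

# Classical analogue (ours) of the repeated delta-steps: Euler-integrate a
# master equation whose Metropolis rates satisfy detailed balance, so its
# fixed point is the Gibbs distribution.
energies = np.array([2.0, 0.0, -2.0, 0.0])
beta, delta, n_steps = 0.2, 0.1, 500

n = len(energies)
L = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            L[i, j] = min(1.0, np.exp(-beta * (energies[i] - energies[j])))
L -= np.diag(L.sum(axis=0))  # columns sum to zero: probability is conserved

p = np.full(n, 1.0 / n)  # start from the maximally mixed distribution
for _ in range(n_steps):
    p = p + delta * (L @ p)  # first-order step, corrections are O(delta^2)

gibbs = np.exp(-beta * energies) / np.exp(-beta * energies).sum()
print(p.round(3), gibbs.round(3))
```

Note that the fixed point of the first-order map coincides exactly with the fixed point of $L$, which is why small-$\delta$ steps converge to the right distribution.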
We use the RESET function to reset the state of a single qubit (thus saving qubits).
Note that by using it, the circuit is no longer coherent. The reset/discard operations act like nature's heat bath.
```python
from classiq.qmod.symbolic import asin


@qfunc
def reset_var(qvar: QArray):
    apply_to_all(RESET, qvar)


@qfunc
def delta_step(
    hamiltonian: CArray[PauliTerm],
    beta: float,
    w0: float,
    delta: CReal,
    delta_qbit: QBit,
    be: BlockEncodedLinbladianState,
):
    block_encode_linbladian(hamiltonian, beta, w0, be)

    # the jump operators don't use block-encoding, hence only the boltzmann qbit will
    # flag an accepted jump
    control(
        be.block.boltzmann_weight == 0, lambda: RY(asin(2 * sqrt(delta)), delta_qbit)
    )
    control(
        delta_qbit == 0,
        lambda: invert(lambda: block_encode_linbladian(hamiltonian, beta, w0, be)),
    )
    reset_var(be.block)
    reset_var(delta_qbit)
```
We choose parameters for the energy evaluation in the operator Fourier transform, a small enough delta timestep, and a number of repetitions. Note that the smaller we make DELTA, the better the approximation of the evolution. The larger NUM_REPETITIONS * DELTA (the total simulation time) is, the better mixed the state becomes, and the closer we get to the approximate detailed balance state.

The larger FT_WINDOW_SIZE is, the closer the approximate detailed balance state gets to the desired detailed balance state. Here we choose parameters that allow a reasonable simulation time on a quantum simulator. Note that since the algorithm includes mid-circuit measurements, the runtime of state-vector simulators scales linearly with the number of shots. If a high number of shots is required, it might be beneficial to use the density matrix simulator instead.
```python
DELTA = 0.1
NUM_REPETITIONS = 50
FT_WINDOW_SIZE = 4

W0, T0 = get_fourier_parameters(FT_WINDOW_SIZE, MAX_ENERGY_SHIFT)


class LinbladianBlock(QStruct):
    # naming corresponds to the paper p.23
    jump: QNum[
        JUMP_VAR_SIZE, UNSIGNED, 0
    ]  # jump operators label for the block encoding
    bohr_freq: QNum[
        FT_WINDOW_SIZE, SIGNED, 0
    ]  # frequency register for the Operator Fourier Transform
    boltzmann_weight: QBit  # ancilla for storing the Bohr-frequency dependent Boltzmann weights in the amplitudes


class BlockEncodedLinbladianState(QStruct):
    block: LinbladianBlock
    state: QArray[QBit, SYSTEM_SIZE]  # holds the state of the system
```
The initial state can in general be any state. The closer the initial state is to the thermal state, the smaller the mixing time should be. In case there is more than one minimum, it might be beneficial to start with the maximally mixed state.
```python
@qfunc
def main(be: Output[BlockEncodedLinbladianState]):
    delta_qbit = QBit()
    allocate(be)
    allocate(delta_qbit)  # start with the |00> state
    repeat(
        NUM_REPETITIONS,
        lambda i: delta_step(HAMILTONIAN, BETA, W0, DELTA, delta_qbit, be),
    )
    drop(delta_qbit)


execution_preferences = ExecutionPreferences(
    num_shots=1000,
    # backend_preferences=ClassiqBackendPreferences(backend_name="simulator_density_matrix"),  # slow, use when number of shots is very high
    backend_preferences=ClassiqBackendPreferences(backend_name="simulator"),
)

qmod = create_model(
    main,
    preferences=Preferences(optimization_level=0),
    execution_preferences=execution_preferences,
)
qprog = synthesize(qmod)
print("synthesized")
show(qprog)
```
Output:
synthesized Quantum program link: https://platform.classiq.io/circuit/32pWSnNqJ7rFHg9MSt4FlcqpWzH
```python
res = execute(qprog).get_sample_result()
res.parsed_counts
```
Lastly, we verify the results against the expected Gibbs distribution:
```python
samples = dict(
    (int("".join(map(str, reversed(s.state["be"].state))), 2), s.shots)
    for s in res.parsed_counts
)
measured_probs = np.array([v for k, v in sorted(samples.items())])
measured_probs = measured_probs / sum(measured_probs)
measured_probs
```
Output:
array([0.177, 0.248, 0.353, 0.222])
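A note on the decoding above: the measured bit list of the `state` register is reversed before being joined and parsed as a binary integer, so that the resulting index matches the basis ordering of the Hamiltonian matrix. A minimal illustration of that conversion (our own, with a made-up bit list):

```python
# Minimal illustration of the bit-order conversion used above: the measured
# bit list is reversed, joined into a bitstring, and parsed as binary.
bits = [1, 0]  # an example measured bit list
index = int("".join(map(str, reversed(bits))), 2)
print(index)  # "01" -> 1
```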
```python
eigs = [2, 0, -2, 0]
expected_probs = [np.exp(-eig * BETA) for eig in eigs]
expected_probs /= sum(expected_probs)
expected_probs
```
Exact detailed balance: remarkably, in a follow-up paper [10] the authors improved the algorithm to reach exact detailed balance instead of approximate, still with finite-time Hamiltonian simulation. They use a Lindbladian with a coherent term as well. The algorithm runtime then scales as $O(\beta \cdot t_{\mathrm{mix}}(\beta))$.
**Block Encoding vs. Sampling Jumps**: it is possible, and equivalent, to classically sample a jump operator on each iteration of the algorithm, instead of block-encoding all the jumps as we did in this implementation.
Coherent version: the paper also presents a purified version of the algorithm, which quadratically improves the runtime.