The Generalized Twirling Approximation: From Linear Algebra to Quantum Error Correction

Abstract

The Generalized Twirling Approximation (GTA) provides a rigorous mathematical framework for efficiently simulating quantum error correction circuits with non-Pauli errors, particularly qubit leakage. This article presents GTA from first principles using pure linear algebra, establishes its connection to quantum mechanics through the Choi-Jamiołkowski isomorphism, and demonstrates its practical implementation in the Pauli+ simulator used by Google Quantum AI.

  1. Introduction: The Simulation Problem

Quantum error correction (QEC) experiments involve hundreds of qubits and thousands of gates. Exact quantum simulation scales exponentially with system size, yet physical error rates (~10⁻³ per gate) necessitate statistical sampling with many shots. The Gottesman-Knill theorem enables efficient classical simulation of Clifford circuits with Pauli errors, but realistic devices exhibit:

- Leakage out of the computational subspace (e.g., to transmon levels |2⟩ and |3⟩)
- Non-Pauli error processes that the Clifford/Pauli framework cannot represent directly

The Generalized Twirling Approximation bridges this gap by mapping arbitrary quantum channels to Generalized Pauli Channels (GPCs)—classical probability distributions that preserve exact measurement statistics for stabilizer codes while enabling polynomial-time simulation.

  2. Linear Algebra Foundations

2.1 Graded Tensor Product Spaces

Let \(V_1, \ldots, V_n\) be finite-dimensional vector spaces over \(\mathbb{C}\) with direct sum decompositions:

\[V_j = \bigoplus_{\alpha \in A} V_{j,\alpha}\]

where \(A = \{c, 2, 3\}\) for transmon qubits (computational, leaked to |2⟩, leaked to |3⟩). The tensor product inherits a grading:

\[V = \bigotimes_{j=1}^n V_j = \bigoplus_{\bar{\alpha} \in A^n} V_{\bar{\alpha}}, \quad V_{\bar{\alpha}} = \bigotimes_{j=1}^n V_{j,\alpha_j}\]

Canonical projections \(\pi_{\bar{\alpha}}: V \to V_{\bar{\alpha}}\) and injections \(\iota_{\bar{\alpha}}: V_{\bar{\alpha}} \hookrightarrow V\) are built factor-wise, \(\pi_{\bar{\alpha}} = \bigotimes_j \pi_{j,\alpha_j}\) and \(\iota_{\bar{\alpha}} = \bigotimes_j \iota_{j,\alpha_j}\).

These satisfy \(\pi_{\bar{\alpha}} \circ \iota_{\bar{\beta}} = \delta_{\bar{\alpha}\bar{\beta}} \cdot \mathrm{id}_{V_{\bar{\alpha}}}\) and \(\sum_{\bar{\alpha}} \iota_{\bar{\alpha}} \circ \pi_{\bar{\alpha}} = \mathrm{id}_V\).
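These identities are concrete enough to check numerically. The following sketch (an illustration with assumed names, not code from the Pauli+ implementation) builds the projections and injections for a single transmon with \(V = \mathrm{span}\{|0\rangle, |1\rangle, |2\rangle, |3\rangle\}\) graded by \(A = \{c, 2, 3\}\):

```python
import numpy as np

dims = {'c': [0, 1], '2': [2], '3': [3]}   # basis indices of each graded component

def injection(alpha):
    """iota_alpha: V_alpha -> V; columns embed the component's basis vectors."""
    idx = dims[alpha]
    m = np.zeros((4, len(idx)))
    for k, i in enumerate(idx):
        m[i, k] = 1.0
    return m

def projection(alpha):
    """pi_alpha: V -> V_alpha; for orthonormal bases it is the transpose."""
    return injection(alpha).T

# Orthogonality: pi_alpha ∘ iota_beta = delta_{alpha beta} id
assert np.allclose(projection('c') @ injection('c'), np.eye(2))
assert np.allclose(projection('c') @ injection('2'), 0)

# Completeness: sum_alpha iota_alpha ∘ pi_alpha = id_V
total = sum(injection(a) @ projection(a) for a in dims)
assert np.allclose(total, np.eye(4))
```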

2.2 The Core Decomposition Theorem

Theorem (Equation 15): For any linear map \(K_{proj}: V_{\bar{\imath}} \to V_{\bar{f}}\) between graded components, there exists a canonical decomposition:

\[K_{proj} = \sum_{\bar{u} \in \mathcal{B}_U} \sum_{\bar{d} \in \mathcal{B}_D} \Phi_R^{\bar{u},\bar{d}} \otimes \Psi_U^{\bar{u}} \otimes \Psi_D^{\bar{d}} \otimes \Psi_L\]

where the index sets partition \(\{1,\ldots,n\} = R \sqcup U \sqcup D \sqcup L\):

| Set | Condition | \(\Phi\) type |
| --- | --- | --- |
| \(R\) | \(i_j = c,\ f_j = c\) | \(\Phi_R^{\bar{u},\bar{d}} \in \mathrm{End}(V_R)\), arbitrary linear operator |
| \(U\) | \(i_j = c,\ f_j \in \{2,3\}\) | \(\Psi_U^{\bar{u}} = \| \bar{f}[U]\rangle\langle\bar{u} \|\), rank-1 outer product |
| \(D\) | \(i_j \in \{2,3\},\ f_j = c\) | \(\Psi_D^{\bar{d}} = \| \bar{d}\rangle\langle\bar{\imath}[D] \|\), rank-1 outer product |
| \(L\) | \(i_j \in \{2,3\},\ f_j \in \{2,3\}\) | \(\Psi_L = \| \bar{f}[L]\rangle\langle\bar{\imath}[L] \|\), fixed isomorphism |

Proof sketch: Insert resolutions of identity on \(U\) (input) and \(D\) (output) computational subspaces:

\[\mathbb{1}_U = \sum_{\bar{u} \in \{0,1\}^{|U|}} |\bar{u}\rangle\langle\bar{u}|, \quad \mathbb{1}_D = \sum_{\bar{d} \in \{0,1\}^{|D|}} |\bar{d}\rangle\langle\bar{d}|\]

Then \(K_{proj} = \mathbb{1}_D \cdot (\pi_{\bar{f}} \circ K \circ \iota_{\bar{\imath}}) \cdot \mathbb{1}_U\) expands to the tensor product form. ∎
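As a concrete illustration (with assumed matrices, not an example from the source), consider a single qutrit \(V = \mathbb{C}^3\) with \(V_c = \mathrm{span}\{|0\rangle, |1\rangle\}\) and \(V_2 = \mathrm{span}\{|2\rangle\}\); the blocks \(\pi_{\bar{f}} \circ K \circ \iota_{\bar{\imath}}\) are then plain matrix slices:

```python
import numpy as np

# Graded qutrit: V_c = span{|0>,|1>}, V_2 = span{|2>} (illustrative example)
iota_c = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
iota_2 = np.array([[0.0], [0.0], [1.0]])

eps = 0.1
K = np.array([[1.0, 0.0, 0.0],
              [0.0, np.sqrt(1 - eps**2), 0.0],
              [0.0, eps, 0.0]])   # |1> leaks to |2> with amplitude eps

K_cc = iota_c.T @ K @ iota_c   # R-type block: operator on the computational subspace
K_2c = iota_2.T @ K @ iota_c   # U-type block: rank-1, <2|K|1> = eps
assert K_cc.shape == (2, 2)
assert abs(K_2c[0, 1] - eps) < 1e-12
```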

2.3 The Twirling Operation

For the \(R\)-block, define the Pauli twirling superoperator:

\[\mathcal{T}: \mathrm{End}(V_R) \to \mathrm{End}(V_R), \quad \mathcal{T}(M): \rho \mapsto \frac{1}{|P_R|} \sum_{P \in P_R} (P^\dagger M P)\,\rho\,(P^\dagger M P)^\dagger\]

where \(P_R\) is the Pauli group on \(|R|\) qubits and \(\mathcal{T}(M)\) denotes the twirl of the conjugation channel \(\rho \mapsto M\rho M^\dagger\). The result is a Pauli channel, written in operator shorthand as

\[\mathcal{T}(M) = \sum_{\bar{\mu} \in \{0,1,2,3\}^{|R|}} p_{\bar{\mu}}(M) \cdot \sigma_{\bar{\mu}}\]

with coefficients \(p_{\bar{\mu}}(M) = \frac{1}{4^{|R|}} \left|\mathrm{Tr}[\sigma_{\bar{\mu}}^\dagger M]\right|^2\).

Key property: \(\mathcal{T}\) is a projection (\(\mathcal{T}^2 = \mathcal{T}\)) onto the subspace of Pauli-diagonal channels.
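A minimal numerical sketch of the single-qubit case, using the standard single-Kraus twirl weights \(p_\mu = |\mathrm{Tr}[\sigma_\mu^\dagger M]|^2/4\) (helper names are illustrative, not from the Pauli+ code):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def twirl_probs(M):
    """Pauli weights p_mu = |Tr(sigma_mu^dagger M)|^2 / 4 of the twirled channel."""
    return {mu: abs(np.trace(s.conj().T @ M))**2 / 4 for mu, s in paulis.items()}

# A coherent over-rotation exp(-i theta X / 2) twirls to a stochastic X flip
theta = 0.1
M = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
p = twirl_probs(M)
assert abs(sum(p.values()) - 1) < 1e-12            # unitary => weights sum to 1
assert abs(p['X'] - np.sin(theta / 2)**2) < 1e-12  # flip probability sin^2(theta/2)
```

Twirling an already-Pauli operator returns a point distribution on that Pauli, which is the projection property \(\mathcal{T}^2 = \mathcal{T}\) in miniature.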

2.4 Reinserting the Twirled Block into Equation 15

After twirling the \(R\)-block, the Generalized Pauli Channel (GPC) is constructed by substituting the Pauli-diagonal form back into Equation 15.

Twirled \(R\)-block:

\[\mathcal{T}(\Phi_R^{\bar{u},\bar{d}}) = \sum_{\bar{\mu} \in \{0,1,2,3\}^{|R|}} p_{\bar{\mu}}^{\bar{u},\bar{d}} \cdot \sigma_{\bar{\mu}}\]

where \(p_{\bar{\mu}}^{\bar{u},\bar{d}} := p_{\bar{\mu}}(\Phi_R^{\bar{u},\bar{d}})\) are the Pauli probabilities for this \((\bar{u},\bar{d})\) pair.

Reconstructed channel:

\[\tilde{K}_{proj} = \sum_{\bar{u},\bar{d}} \mathcal{T}(\Phi_R^{\bar{u},\bar{d}}) \otimes \Psi_U^{\bar{u}} \otimes \Psi_D^{\bar{d}} \otimes \Psi_L\] \[= \sum_{\bar{u},\bar{d}} \sum_{\bar{\mu}} p_{\bar{\mu}}^{\bar{u},\bar{d}} \cdot \sigma_{\bar{\mu}} \otimes \Psi_U^{\bar{u}} \otimes \Psi_D^{\bar{d}} \otimes \Psi_L\]

The GPC is a classical probability distribution over the joint outcomes \((\bar{u}, \bar{d}, \bar{\mu})\): the computational state \(\bar{u}\) from which the \(U\)-qubits leak, the computational state \(\bar{d}\) into which the \(D\)-qubits return, and the Pauli \(\sigma_{\bar{\mu}}\) applied to the \(R\)-qubits.

Sampling view: To apply \(\tilde{K}_{proj}\):

  1. Sample \((\bar{u},\bar{d})\) from marginal distribution \(p(\bar{u},\bar{d}) = \sum_{\bar{\mu}} p_{\bar{\mu}}^{\bar{u},\bar{d}}\)

  2. Sample Pauli \(\sigma_{\bar{\mu}}\) from conditional \(p(\bar{\mu}|\bar{u},\bar{d}) = p_{\bar{\mu}}^{\bar{u},\bar{d}}/p(\bar{u},\bar{d})\)

  3. Apply: \(\sigma_{\bar{\mu}}\) to \(R\)-qubits, update leakage labels \(U \to \bar{f}[U]\), \(D \to\) random computational state, \(L \to \bar{f}[L]\)
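The three sampling steps can be sketched as follows; the dictionary layout and helper names are assumptions for illustration, not the Pauli+ internals:

```python
import random

def sample_gpc(entry_probs, rng=random):
    """entry_probs maps (u_bar, d_bar) -> {pauli_label: joint weight}."""
    # Step 1: sample (u, d) from the marginal p(u, d) = sum_mu p[mu, u, d]
    marginals = {ud: sum(p.values()) for ud, p in entry_probs.items()}
    r = rng.random() * sum(marginals.values())
    for ud, m in marginals.items():
        r -= m
        if r <= 0:
            break
    # Step 2: sample the Pauli from the conditional p(mu | u, d)
    r = rng.random() * marginals[ud]
    for mu, p in entry_probs[ud].items():
        r -= p
        if r <= 0:
            break
    # Step 3 is left to the caller: apply sigma_mu to the R-qubits and
    # update the leakage labels for U, D, and L.
    return ud, mu
```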

  3. Connection to Quantum Mechanics

3.1 The Choi-Jamiołkowski Isomorphism

The abstract construction gains physical meaning through the Choi state. For a quantum channel \(\mathcal{E}: \mathcal{L}(\mathcal{H}) \to \mathcal{L}(\mathcal{H})\) with Kraus operators \(\{K_j\}\):

\[\rho_{\mathcal{E}} = (\mathcal{E} \otimes \mathrm{id})(|\Omega\rangle\langle\Omega|) = \sum_j |K_j\rangle\rangle\langle\langle K_j|\]

where \(|\Omega\rangle = \frac{1}{\sqrt{d}}\sum_i |i\rangle \otimes |i\rangle\) is the maximally entangled state and \(|K_j\rangle\rangle = (K_j \otimes I)|\Omega\rangle\) is the vectorized operator.
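A small sketch of the Choi construction (function and variable names are assumptions), verifying that a trace-preserving channel yields a unit-trace Choi state:

```python
import numpy as np

def choi(kraus_ops, d=2):
    """Choi state sum_j |K_j>><<K_j| with |K_j>> = (K_j ⊗ I)|Omega>."""
    omega = np.zeros(d * d, dtype=complex)
    for i in range(d):
        omega[i * d + i] = 1 / np.sqrt(d)      # |Omega> = (1/sqrt d) sum_i |ii>
    C = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = np.kron(K, np.eye(d)) @ omega      # vectorized Kraus operator
        C += np.outer(v, v.conj())
    return C

# Amplitude damping with gamma = 0.1; trace preservation implies Tr[rho_E] = 1
g = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
C = choi([K0, K1])
assert abs(np.trace(C) - 1) < 1e-12
```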

The GTA steps above correspond to measuring this Choi state in a product basis: the leakage grading reads off the \(U\), \(D\), and \(L\) sectors, while twirling dephases the \(R\) sector in the Pauli basis.

3.2 Why Twirling Preserves Statistics

Theorem: For any stabilizer state \(|\psi\rangle\) and stabilizer measurement \(M\), the GPC produces identical statistics to the original channel.

Proof: Stabilizer measurements project to computational basis. The twirling \(\mathcal{T}\) preserves all diagonal elements (populations) of the Choi state in this basis. Off-diagonal elements (coherences) are destroyed by measurement anyway, making \(\mathcal{T}\) lossless for this observable algebra. ∎

3.3 Physical Interpretation of the Four Cases

| Case | Quantum Process | Classical Analog |
| --- | --- | --- |
| \(R\) (\(c \to c\)) | Pauli error in computational subspace | Bit/phase flip |
| \(U\) (\(c \to 2/3\)) | Leakage to non-computational state | Absorption |
| \(D\) (\(2/3 \to c\)) | Return with lost coherence | Re-emission with thermalization |
| \(L\) (\(2/3 \to 2/3\)) | Persistent leakage | Metastable state |

The tensor product structure in Equation 15 reflects spatial independence: errors on different qubits factorize.

  4. The Pauli+ Simulator

4.1 Data Structures

Pauli+ maintains a hybrid classical-quantum state:

class PauliPlusState:
    leakage_labels: Array[{c, 2, 3}, n]      # Classical
    stabilizer_tableau: BinaryMatrix[n, 2n]  # Quantum (Clifford)

The tableau represents stabilizer generators \(\langle G_1, \ldots, G_n \rangle\) via symplectic notation: each \(G_i = (-1)^{a_i} X^{\vec{x}_i} Z^{\vec{z}_i}\) stored as \((\vec{x}_i, \vec{z}_i) \in \mathbb{F}_2^{2n}\).
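The commutation check that drives tableau updates reduces to a binary symplectic form; a minimal sketch (names assumed):

```python
def commutes(x1, z1, x2, z2):
    """Two Paulis X^x1 Z^z1 and X^x2 Z^z2 commute iff the binary
    symplectic form x1.z2 + x2.z1 vanishes mod 2."""
    s = sum(a & b for a, b in zip(x1, z2)) + sum(a & b for a, b in zip(x2, z1))
    return s % 2 == 0

assert not commutes([1], [0], [0], [1])          # X and Z anticommute
assert commutes([1, 0], [0, 0], [0, 0], [0, 1])  # X⊗I commutes with I⊗Z
```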

4.2 Pre-computation: Building the GPC

For each gate/idle operation, Equation 15 is evaluated symbolically:

from itertools import product
from collections import defaultdict

# Helpers partition, partial_evaluate, twirl_to_pauli, and norm are assumed
# to implement Equation 15; 'c' labels the computational subspace.
def build_gpc(kraus_operators, n_qubits):
    gpc = {}
    for i_bar in product(['c', 2, 3], repeat=n_qubits):
        for f_bar in product(['c', 2, 3], repeat=n_qubits):
            R, U, D, L = partition(i_bar, f_bar)

            pauli_probs = defaultdict(float)
            total_prob = 0.0

            for K in kraus_operators:
                for u_vec in product([0, 1], repeat=len(U)):
                    for d_vec in product([0, 1], repeat=len(D)):
                        # Equation 15: extract the R-block for this (u, d) pair
                        M = partial_evaluate(K, i_bar, f_bar, u_vec, d_vec)

                        # Twirl to a Pauli channel
                        for mu, p_mu in twirl_to_pauli(M).items():
                            weight = (1 / 2**len(U)) * p_mu * norm(M)**2
                            pauli_probs[mu] += weight
                            total_prob += weight

            if total_prob == 0:
                continue  # forbidden transition; omit from the table

            # Normalize the conditional Pauli distribution
            gpc[(i_bar, f_bar)] = {
                'P_transition': total_prob,
                'P_pauli': {mu: p / total_prob for mu, p in pauli_probs.items()},
                'sets': {'R': R, 'U': U, 'D': D, 'L': L},
            }
    return gpc

4.3 Runtime: Sampling from the GPC

def apply_operation(state, gpc):
    i_bar = state.leakage_labels

    # Sample the final leakage configuration from the transition marginals
    # ('sample' draws from a weighted discrete distribution)
    candidates = [f for (i, f) in gpc if i == i_bar]
    probs = [gpc[(i_bar, f)]['P_transition'] for f in candidates]
    f_bar = sample(candidates, probs)

    # Update leakage labels (classical)
    state.leakage_labels = f_bar

    entry = gpc[(i_bar, f_bar)]

    # Handle R: apply a sampled Pauli to the stabilizer tableau
    R = entry['sets']['R']
    if R:
        pauli_dist = entry['P_pauli']
        pauli = sample(list(pauli_dist), list(pauli_dist.values()))
        apply_pauli_to_tableau(state, R, pauli)

    # Handle D: maximally depolarizing (random reset into the computational subspace)
    for q in entry['sets']['D']:
        add_random_generator(state, q)

    # U and L: already handled by the leakage-label update

4.4 Complexity Analysis

| Operation | Cost | vs. Exact Quantum |
| --- | --- | --- |
| State storage | \(O(n^2)\) bits | \(O(2^n)\) complex numbers |
| Gate application | \(O(n^2)\) | \(O(2^{3n})\) |
| Sampling from GPC | \(O(1)\) (pre-computed) | N/A |
| Memory (GPC tables) | \(O(4^n \cdot 4^{\|R\|_{\max}})\) | N/A |

For a surface code with \(n=49\) qubits and \(|R|_{\max} \approx 10\) (typical), Pauli+ runs in seconds versus years for exact simulation.

  5. Validation and Results

Google Quantum AI validated Pauli+ against exact quantum trajectories (kraus_sim) for distance-3 surface codes. Key findings:

| Metric | Agreement | Implication |
| --- | --- | --- |
| Logical error probability | ~1% relative error | GTA is accurate |
| Detection event fractions | \(R^2 > 0.95\) | Leakage model correct |
| \(p_{ij}\) correlations | Within 2σ | Spatial correlations captured |

The small discrepancies arise from the coherences discarded by twirling, which contribute only at higher order for stabilizer observables.

  6. Generalizations and Open Problems

6.1 Beyond Pauli: General Clifford Twirling

For codes with non-Pauli stabilizers (e.g., color codes), replace \(P_R\) with the full Clifford group. The decomposition still holds, but the twirling cost grows from \(O(4^{|R|})\) to the size of the Clifford group, which scales as \(2^{\Theta(|R|^2)}\).

6.2 Continuous Leakage Manifolds

For systems with infinite-dimensional leakage (e.g., cavity modes), discretize \(\mathcal{H}_{\text{leak}}\) or use Gaussian state approximations.

6.3 Correlated Noise

Equation 15 assumes product structure. For spatially correlated noise, use cluster expansion or tensor network methods on top of GPC.

  7. Conclusion

The Generalized Twirling Approximation demonstrates how abstract linear algebra (graded tensor products, block decompositions) connects to practical quantum computing (efficient error correction simulation). By isolating the "quantum" part (R-set Pauli errors) from "classical" parts (U, D, L transitions), GTA achieves exponential speedup without sacrificing accuracy for stabilizer codes.

The key insight of Equation 15—that any channel block decomposes into tensor products with controlled ranks—provides a template for similar approximations across quantum information theory.


References

  1. Google Quantum AI, "Suppressing quantum errors by scaling a surface code logical qubit," Nature 614, 676–681 (2023)

  2. Section XIV.B.3 of supplementary information (arXiv:2207.06431)

  3. Gottesman, "The Heisenberg representation of quantum computers," Group22 (1999)

  4. Choi, "Completely positive linear maps on complex matrices," Linear Algebra Appl. (1975)

Appendix

Theorem: Tensor Product Inherits Grading

Given: Vector spaces \(V_1, \ldots, V_n\) over \(\mathbb{C}\) with direct sum decompositions:

\[V_j = \bigoplus_{\alpha \in A} V_{j,\alpha}\]

where \(A\) is a finite index set (e.g., \(\{c, 2, 3\}\) for transmon qubits).

To Prove: The tensor product \(V = \bigotimes_{j=1}^n V_j\) has a canonical grading:

\[V = \bigoplus_{\bar{\alpha} \in A^n} V_{\bar{\alpha}}, \quad \text{where } V_{\bar{\alpha}} = \bigotimes_{j=1}^n V_{j,\alpha_j}\]

Proof

Step 1: Establish the Isomorphism for Each Factor

For each \(V_j\), the direct sum decomposition gives a canonical isomorphism:

\[\phi_j: V_j \xrightarrow{\sim} \bigoplus_{\alpha \in A} V_{j,\alpha}\]

with canonical projections \(\pi_{j,\alpha}: V_j \to V_{j,\alpha}\) and injections \(\iota_{j,\alpha}: V_{j,\alpha} \hookrightarrow V_j\) satisfying:

\[\pi_{j,\alpha} \circ \iota_{j,\beta} = \delta_{\alpha\beta} \cdot \mathrm{id}_{V_{j,\alpha}}, \quad \sum_{\alpha \in A} \iota_{j,\alpha} \circ \pi_{j,\alpha} = \mathrm{id}_{V_j}\]

Step 2: Apply Tensor Product Distributivity

The tensor product distributes over direct sums (this is a defining property of \(\otimes\) in the category of vector spaces):

\[\bigotimes_{j=1}^n \left(\bigoplus_{\alpha_j \in A} V_{j,\alpha_j}\right) \cong \bigoplus_{(\alpha_1, \ldots, \alpha_n) \in A^n} \left(\bigotimes_{j=1}^n V_{j,\alpha_j}\right)\]

Explicit construction: The isomorphism is induced by the multilinear map:

\[\Phi: \prod_{j=1}^n V_j \to \bigoplus_{\bar{\alpha} \in A^n} V_{\bar{\alpha}}\]

defined on pure tensors by:

\[\Phi(v_1, \ldots, v_n) = \sum_{\bar{\alpha} \in A^n} \pi_{1,\alpha_1}(v_1) \otimes \cdots \otimes \pi_{n,\alpha_n}(v_n)\]

Step 3: Verify the Grading Structure

Define \(V_{\bar{\alpha}} = \bigotimes_{j=1}^n V_{j,\alpha_j}\). We must verify:

(a) Direct sum decomposition:

\[V = \bigoplus_{\bar{\alpha} \in A^n} V_{\bar{\alpha}}\]

This follows immediately from the distributivity isomorphism. Each element \(v \in V\) has a unique decomposition:

\[v = \sum_{\bar{\alpha} \in A^n} v_{\bar{\alpha}}, \quad v_{\bar{\alpha}} \in V_{\bar{\alpha}}\]

(b) Canonical projections and injections:

Define \(\pi_{\bar{\alpha}}: V \to V_{\bar{\alpha}}\) and \(\iota_{\bar{\alpha}}: V_{\bar{\alpha}} \hookrightarrow V\) by:

\[\pi_{\bar{\alpha}} = \bigotimes_{j=1}^n \pi_{j,\alpha_j}, \quad \iota_{\bar{\alpha}} = \bigotimes_{j=1}^n \iota_{j,\alpha_j}\]

Step 4: Verify the Characteristic Identities

Identity 1 (Orthogonality): For \(\bar{\alpha}, \bar{\beta} \in A^n\):

\[\pi_{\bar{\alpha}} \circ \iota_{\bar{\beta}} = \left(\bigotimes_{j=1}^n \pi_{j,\alpha_j}\right) \circ \left(\bigotimes_{j=1}^n \iota_{j,\beta_j}\right) = \bigotimes_{j=1}^n (\pi_{j,\alpha_j} \circ \iota_{j,\beta_j})\] \[= \bigotimes_{j=1}^n (\delta_{\alpha_j \beta_j} \cdot \mathrm{id}_{V_{j,\alpha_j}}) = \delta_{\bar{\alpha}\bar{\beta}} \cdot \mathrm{id}_{V_{\bar{\alpha}}}\]

where \(\delta_{\bar{\alpha}\bar{\beta}} = \prod_{j=1}^n \delta_{\alpha_j \beta_j}\).

Identity 2 (Completeness):

\[\sum_{\bar{\alpha} \in A^n} \iota_{\bar{\alpha}} \circ \pi_{\bar{\alpha}} = \sum_{\bar{\alpha} \in A^n} \bigotimes_{j=1}^n (\iota_{j,\alpha_j} \circ \pi_{j,\alpha_j})\] \[= \bigotimes_{j=1}^n \left(\sum_{\alpha_j \in A} \iota_{j,\alpha_j} \circ \pi_{j,\alpha_j}\right) = \bigotimes_{j=1}^n \mathrm{id}_{V_j} = \mathrm{id}_V\]

The key step uses the fact that tensor product is \(\mathbb{C}\)-multilinear: sums and compositions factorize across tensor products.
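The completeness identity for the tensor-product grading can be verified numerically; this sketch (illustrative, with \(A = \{c, 2, 3\}\) and two factors of \(\mathbb{C}^4\)) checks \(\sum_{\bar{\alpha}} \iota_{\bar{\alpha}} \circ \pi_{\bar{\alpha}} = \mathrm{id}_V\):

```python
import numpy as np
from itertools import product

dims = {'c': [0, 1], '2': [2], '3': [3]}   # basis indices per graded component

def iota(alpha):
    """Injection V_alpha -> C^4 as a matrix whose columns embed the basis."""
    cols = dims[alpha]
    m = np.zeros((4, len(cols)))
    for k, i in enumerate(cols):
        m[i, k] = 1.0
    return m

# Completeness on V ⊗ V: sum over alpha-bar in A^2 of the iota∘pi factors
total = sum(np.kron(iota(a) @ iota(a).T, iota(b) @ iota(b).T)
            for a, b in product(dims, repeat=2))
assert np.allclose(total, np.eye(16))
```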

The Core Identity Explained: I

\[\pi_{j,\alpha} \circ \iota_{j,\beta} = \delta_{\alpha\beta} \cdot \mathrm{id}_{V_{j,\alpha}}\]

This involves two maps from the direct sum structure on \(V_j = \bigoplus_{\gamma \in A} V_{j,\gamma}\).


Definitions

Canonical Injections (ι)

For each component \(V_{j,\beta}\) in the direct sum, there's an injection:

\[\iota_{j,\beta}: V_{j,\beta} \hookrightarrow V_j = \bigoplus_{\gamma \in A} V_{j,\gamma}\]

Action: Takes \(v \in V_{j,\beta}\) and embeds it into the direct sum as a tuple with \(v\) in position \(\beta\) and \(0\) elsewhere:

\[\iota_{j,\beta}(v) = (0, \ldots, 0, \underbrace{v}_{\text{position } \beta}, 0, \ldots, 0)\]
Canonical Projections (π)

For each component, there's a projection:

\[\pi_{j,\alpha}: V_j = \bigoplus_{\gamma \in A} V_{j,\gamma} \to V_{j,\alpha}\]

Action: Takes a tuple \((v_\gamma)_{\gamma \in A}\) and extracts the \(\alpha\)-component:

\[\pi_{j,\alpha}((v_\gamma)_{\gamma \in A}) = v_\alpha\]

Case Analysis

Case 1: \(\alpha = \beta\) (Same Index)
\[\pi_{j,\alpha} \circ \iota_{j,\alpha}: V_{j,\alpha} \to V_{j,\alpha}\]

Take any \(v \in V_{j,\alpha}\):

  1. \(\iota_{j,\alpha}(v) = (0, \ldots, 0, v, 0, \ldots, 0)\) — embeds \(v\) at position \(\alpha\)

  2. \(\pi_{j,\alpha}((0, \ldots, 0, v, 0, \ldots, 0)) = v\) — extracts component at position \(\alpha\)

Result: \((\pi_{j,\alpha} \circ \iota_{j,\alpha})(v) = v\)

Therefore: \(\pi_{j,\alpha} \circ \iota_{j,\alpha} = \mathrm{id}_{V_{j,\alpha}}\)

Case 2: \(\alpha \neq \beta\) (Different Indices)
\[\pi_{j,\alpha} \circ \iota_{j,\beta}: V_{j,\beta} \to V_{j,\alpha}\]

Take any \(v \in V_{j,\beta}\):

  1. \(\iota_{j,\beta}(v) = (0, \ldots, 0, \underbrace{v}_{\text{position } \beta}, 0, \ldots, 0)\) — embeds \(v\) at position \(\beta\)

  2. \(\pi_{j,\alpha}((0, \ldots, 0, v, 0, \ldots, 0)) = 0\) — extracts component at position \(\alpha\), which is \(0\) since \(\alpha \neq \beta\)

Result: \((\pi_{j,\alpha} \circ \iota_{j,\beta})(v) = 0\) for all \(v\)

Therefore: \(\pi_{j,\alpha} \circ \iota_{j,\beta} = 0\) (the zero map)


The Kronecker Delta Notation

\[\delta_{\alpha\beta} \cdot \mathrm{id}_{V_{j,\alpha}} = \begin{cases} \mathrm{id}_{V_{j,\alpha}} & \text{if } \alpha = \beta \\ 0 & \text{if } \alpha \neq \beta \end{cases}\]

When \(\alpha \neq \beta\), the zero map goes from \(V_{j,\beta} \to V_{j,\alpha}\). Strictly speaking, these are different spaces, so "zero" means the zero linear map between them.

In the categorical/physics notation, this is compactly written with the understanding that when \(\delta_{\alpha\beta} = 0\), the identity doesn't exist (it's the zero map between different spaces).


Summary Diagram

V_{j,α} ──ι_{j,α}──→ V_j ──π_{j,α}──→ V_{j,α}   ⇒  id_{V_{j,α}}
     ↓                    ↓
   (v) ↦ (0,...,v,...,0) ↦ v

V_{j,β} ──ι_{j,β}──→ V_j ──π_{j,α}──→ V_{j,α}   ⇒  0  (when α≠β)
     ↓                    ↓
   (v) ↦ (0,...,v,...,0) ↦ 0   (extracts α-position, which is 0)

The Core Identity Explained: II

\[\sum_{\alpha \in A} \iota_{j,\alpha} \circ \pi_{j,\alpha} = \mathrm{id}_{V_j}\]

where \(V_j = \bigoplus_{\alpha \in A} V_{j,\alpha}\) is the direct sum.


Breaking Down the Composition

For a single \(\alpha \in A\), the map \(\iota_{j,\alpha} \circ \pi_{j,\alpha}\) acts as:

\[\iota_{j,\alpha} \circ \pi_{j,\alpha}: V_j \to V_j\]

It projects onto component \(V_{j,\alpha}\) then injects back into \(V_j\). This is a projection operator onto the subspace \(V_{j,\alpha} \subset V_j\).


What It Does to an Element

Take any \(v \in V_j\). By definition of direct sum, \(v\) has a unique representation as a tuple:

\[v = (v_\alpha)_{\alpha \in A} = (v_{\alpha_1}, v_{\alpha_2}, \ldots, v_{\alpha_{|A|}})\]

where each \(v_\alpha \in V_{j,\alpha}\).

Now apply \(\iota_{j,\alpha} \circ \pi_{j,\alpha}\) to \(v\):

  1. Project: \(\pi_{j,\alpha}(v) = v_\alpha\) (extracts the \(\alpha\)-component)

  2. Inject: \(\iota_{j,\alpha}(v_\alpha) = (0, \ldots, 0, v_\alpha, 0, \ldots, 0)\) (embeds back)

Result: \((\iota_{j,\alpha} \circ \pi_{j,\alpha})(v) = (0, \ldots, 0, v_\alpha, 0, \ldots, 0)\)


The Sum Reconstructs the Original

Now sum over all \(\alpha \in A\):

\[\sum_{\alpha \in A} (\iota_{j,\alpha} \circ \pi_{j,\alpha})(v) = \sum_{\alpha \in A} (0, \ldots, 0, v_\alpha, 0, \ldots, 0)\] \[= (v_{\alpha_1}, v_{\alpha_2}, \ldots, v_{\alpha_{|A|}}) = v\]

Each component \(v_\alpha\) appears exactly once in its correct position. The sum reconstructs the original tuple.


This article synthesizes theoretical foundations from linear algebra, quantum information, and practical implementation details from the Google Quantum AI experiment.

CC BY-SA 4.0 Septimia Zenobia. Last modified: April 09, 2026. Website built with Franklin.jl and the Julia programming language.