PEPS (2D Tensor Networks)#
Projected Entangled Pair States for 2D lattice quantum simulations.
Overview#
The peps module implements Projected Entangled Pair States for simulating quantum systems on 2D lattices. PEPS is the natural generalization of Matrix Product States (MPS) from 1D chains to 2D grids, enabling efficient simulation of:
2D cluster states and graph states
Shallow quantum supremacy circuits
2D spin systems (e.g., 2D Ising, Heisenberg)
Topological quantum codes (surface codes)
Small quantum processor patches (4×4, 5×5 grids)
Key Features#
2D lattice structure: Natural representation for grid-based systems
Moderate entanglement: Handles area-law entangled states efficiently
Boundary MPS contraction: Efficient approximate contraction algorithm
GPU acceleration: CUDA-optimized tensor operations
Small patches: Optimized for 4×4 to 6×6 grids (16-36 qubits)
PEPS Network Structure#
| | | |
--•---•---•---•--
| | | |
--•---•---•---•--
| | | |
--•---•---•---•--
| | | |
Each node • is a rank-5 tensor \(T^{s}_{u,l,r,d}\), where:
u, l, r, d: virtual indices (bond dimension χ) connecting to neighboring tensors
s: physical index (dimension d=2 for qubits)
Storage: O(rows × cols × χ⁴ × d) vs. O(2^(rows×cols)) for statevector
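To make the scaling concrete, the short sketch below (plain Python, illustrative parameter values only) compares the PEPS parameter count against full statevector storage for a 5×5 qubit grid.

# Rough storage comparison for a 5×5 qubit grid (illustrative numbers, complex64 = 8 bytes)
rows, cols = 5, 5
chi, d = 4, 2                              # PEPS bond dimension and physical dimension
bytes_per_elem = 8

peps_elems = rows * cols * chi**4 * d      # O(rows × cols × χ⁴ × d)
statevector_elems = 2 ** (rows * cols)     # O(2^(rows×cols))

print(f"PEPS:        {peps_elems * bytes_per_elem / 2**10:.1f} KB")    # ≈ 100 KB
print(f"Statevector: {statevector_elems * bytes_per_elem / 2**20:.1f} MB")  # ≈ 256 MB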
Mathematical Background#
PEPS Representation#
A 2D quantum state on an m×n lattice is represented as:
\[
|\psi\rangle \;=\; \sum_{\{s_{i,j}\}} \mathcal{C}\!\left[\prod_{i=1}^{m}\prod_{j=1}^{n} T^{[i,j]\,s_{i,j}}_{u\,l\,r\,d}\right] |s_{1,1}\, s_{1,2} \cdots s_{m,n}\rangle
\]
where \(\mathcal{C}[\cdot]\) denotes the tensor contraction performed over the virtual indices connecting neighboring sites.
Key property: Area-law entanglement → small bond dimension χ
Contraction Problem#
Computing \(\langle\psi|\psi\rangle\) requires contracting the 2D tensor network. This is #P-hard in general, but efficient approximations exist:
Boundary MPS method: Contract rows sequentially into 1D boundary MPS
Corner transfer matrix: Contract corners first, then edges
Simple update: Approximate gate application with local updates
Complexity: O(m × n × χ⁵ × χ_b²) for boundary MPS, where χ is the PEPS bond dimension and χ_b the boundary MPS bond dimension
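For very small patches the network can be contracted exactly, which makes the structure of the problem easy to see. The sketch below contracts the norm of a random 2×2 PEPS with plain torch.einsum, using the [up, left, phys, right, down] index order of PEPSTensor; it illustrates what the approximate methods above avoid doing at scale and is not the library's contraction routine.

import torch

# Exact contraction of a 2×2 PEPS norm (feasible only at this tiny scale).
# Index order per site: [up, left, phys, right, down]; open boundary bonds have dim 1.
chi, d = 3, 2
A = torch.randn(1, 1, d, chi, chi, dtype=torch.complex64)   # site (0,0)
B = torch.randn(1, chi, d, 1, chi, dtype=torch.complex64)   # site (0,1)
C = torch.randn(chi, 1, d, chi, 1, dtype=torch.complex64)   # site (1,0)
D = torch.randn(chi, chi, d, 1, 1, dtype=torch.complex64)   # site (1,1)

# Contract the four shared virtual bonds; the remaining legs are the physical ones,
# i.e. amp[s00, s01, s10, s11] = ⟨s00 s01 s10 s11|ψ⟩.
amp = torch.einsum('abshv,chtew,vfzgi,wgpjk->stzp', A, B, C, D)
norm_sq = (amp.abs() ** 2).sum()
print(f"⟨ψ|ψ⟩ = {norm_sq.item():.6f}")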
Area Law Entanglement#
For 2D systems obeying the area law:
\[
S(A) \;\leq\; c\,|\partial A|
\]
where S(A) is the entanglement entropy of region A and \(|\partial A|\) is the length of its boundary.
Consequence: Bond dimension χ grows slowly with system size, making PEPS efficient for physical systems.
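The reasoning behind this consequence fits in one inequality. Cutting out a region A severs roughly \(|\partial A|\) virtual bonds, each of dimension χ, so a PEPS can carry at most
\[
S(A) \;\leq\; |\partial A|\,\log\chi .
\]
Matching this against the area law \(S(A) \le c\,|\partial A|\) shows that, in principle, a system-size-independent bond dimension \(\chi \approx e^{c}\) suffices.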
Enums#
ContractionStrategy#
- class atlas_q.peps.ContractionStrategy[source]#
Enumeration of PEPS contraction algorithms.
- BOUNDARY_MPS#
Contract rows into MPS boundary sequentially (default). Best balance of accuracy and speed.
Complexity: O(m × n × χ⁵)
- COLUMN_BY_COLUMN#
Contract columns sequentially instead of rows.
Use case: Preferred for wide, short grids (more columns than rows), where the column-wise boundary MPS stays short
- SIMPLE_UPDATE#
Iterative tensor updates for time evolution.
Use case: Time evolution, ground state search
- FULL_UPDATE#
Exact contraction using full environment tensors.
Warning: Exponentially expensive, only for very small patches (≤3×3)
Configuration#
PEPSConfig#
- class atlas_q.peps.PEPSConfig(rows, cols, physical_dim=2, bond_dim=4, contraction_strategy=ContractionStrategy.BOUNDARY_MPS, boundary_chi=32, device='cuda')[source]#
Configuration class for PEPS simulation.
Constructor:
from atlas_q.peps import PEPSConfig, ContractionStrategy

config = PEPSConfig(
    rows=4,
    cols=4,
    physical_dim=2,        # Qubits
    bond_dim=4,            # PEPS bond dimension χ
    contraction_strategy=ContractionStrategy.BOUNDARY_MPS,
    boundary_chi=32,       # Boundary MPS bond dimension
    device='cuda'
)
- Parameters:
rows (int): Number of rows in 2D grid
cols (int): Number of columns in 2D grid
physical_dim (int): Physical dimension per site (default: 2 for qubits)
bond_dim (int): Virtual bond dimension χ (default: 4)
contraction_strategy (ContractionStrategy): Contraction algorithm
boundary_chi (int): Bond dimension for boundary MPS (default: 32)
device (str): 'cuda' or 'cpu'
Memory estimate:
# PEPS tensors: rows × cols × χ⁴ × d
# Example: 4×4 grid, χ=4, d=2
Memory ≈ 16 × 4⁴ × 2 × 8 bytes = 64 KB

# Boundary MPS: cols × χ_boundary²
# Example: 4 cols, χ_boundary=32
Boundary ≈ 4 × 32² × 8 bytes = 32 KB

Total ≈ 96 KB (tiny!)
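The same estimate can be computed programmatically. The helper below is a hypothetical convenience function (not part of atlas_q) that reproduces the numbers above for arbitrary grid sizes.

def estimate_peps_memory_kb(rows, cols, chi, d=2, boundary_chi=32, bytes_per_elem=8):
    """Rough PEPS + boundary-MPS storage estimate in KB (complex64 = 8 bytes/element)."""
    peps_bytes = rows * cols * chi**4 * d * bytes_per_elem
    boundary_bytes = cols * boundary_chi**2 * bytes_per_elem
    return (peps_bytes + boundary_bytes) / 1024

print(estimate_peps_memory_kb(4, 4, chi=4))   # ≈ 96.0, matching the example above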
Classes#
PEPSTensor#
- class atlas_q.peps.PEPSTensor(row, col, tensor)[source]#
Single PEPS tensor at position (row, col) in the lattice.
Constructor:
import torch
from atlas_q.peps import PEPSTensor

# Create tensor at position (0, 0)
# Shape: [χ_up, χ_left, d, χ_right, χ_down]
tensor = torch.randn(1, 1, 2, 4, 4, dtype=torch.complex64, device='cuda')
peps_tensor = PEPSTensor(row=0, col=0, tensor=tensor)
- Parameters:
row (int): Row index
col (int): Column index
tensor (torch.Tensor): Rank-5 tensor with shape [χ_up, χ_left, d, χ_right, χ_down]
Attributes:
- row#
Row position (int)
- col#
Column position (int)
- tensor#
Rank-5 tensor data (torch.Tensor)
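A useful special case is a site in the product state |0⟩: it carries no entanglement, so every virtual bond can have dimension 1. The snippet below builds such a tensor explicitly; it is illustrative and independent of how PEPS initializes its sites internally.

import torch
from atlas_q.peps import PEPSTensor

# |0⟩ product-state site: shape [χ_up, χ_left, d, χ_right, χ_down] = [1, 1, 2, 1, 1]
zero_site = torch.zeros(1, 1, 2, 1, 1, dtype=torch.complex64, device='cuda')
zero_site[0, 0, 0, 0, 0] = 1.0      # amplitude 1 on the |0⟩ physical basis state
site = PEPSTensor(row=0, col=0, tensor=zero_site)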
PEPS#
- class atlas_q.peps.PEPS(config)[source]#
Projected Entangled Pair State tensor network for 2D lattices.
Manages a 2D grid of rank-5 tensors with methods for gate application, contraction, and expectation value computation.
Constructor:
from atlas_q.peps import PEPS, PEPSConfig

config = PEPSConfig(rows=5, cols=5, bond_dim=4, device='cuda')
peps = PEPS(config)
- Parameters:
config(PEPSConfig): PEPS configuration
Attributes:
- config#
PEPS configuration (PEPSConfig)
- rows#
Number of rows (int)
- cols#
Number of columns (int)
- tensors#
Dictionary mapping (row, col) → PEPSTensor
Methods:
- apply_single_qubit_gate(gate, row, col)#
Apply single-qubit unitary gate at position (row, col).
- Parameters:
gate (torch.Tensor) – 2×2 unitary matrix
row (int) – Row index (0 to rows-1)
col (int) – Column index (0 to cols-1)
Complexity: O(χ⁴)
Example:
import torch
from atlas_q.peps import PEPS, PEPSConfig

peps = PEPS(PEPSConfig(rows=4, cols=4, device='cuda'))

# Apply Hadamard to all qubits
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64, device='cuda') / (2**0.5)
for i in range(4):
    for j in range(4):
        peps.apply_single_qubit_gate(H, i, j)
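Conceptually, a single-qubit gate only touches the physical leg of one tensor, which is why the cost stays at O(χ⁴). A minimal sketch of that contraction, assuming the [up, left, phys, right, down] index order (the library's internal implementation may differ):

import torch

def apply_1q_gate_sketch(gate, site_tensor):
    """Contract a 2×2 gate into the physical leg of a rank-5 PEPS tensor
    ([up, left, phys, right, down]); the virtual bonds are untouched."""
    # new[u, l, p, r, d] = Σ_s gate[p, s] · T[u, l, s, r, d]
    return torch.einsum('ps,ulsrd->ulprd', gate, site_tensor)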
- apply_two_qubit_gate(gate, pos1, pos2)#
Apply two-qubit gate between adjacent sites.
- Parameters:
gate (torch.Tensor) – 4×4 unitary matrix
pos1 (tuple) – First position (row1, col1)
pos2 (tuple) – Second position (row2, col2)
Requirements: pos1 and pos2 must be adjacent (horizontally or vertically)
Complexity: O(χ⁵) with SVD truncation
Example:
# Apply CZ gate between (0,0) and (0,1)
CZ = torch.diag(torch.tensor([1, 1, 1, -1], dtype=torch.complex64, device='cuda'))
peps.apply_two_qubit_gate(CZ, (0, 0), (0, 1))

# Apply CNOT vertically
CNOT = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],
                    dtype=torch.complex64, device='cuda')
peps.apply_two_qubit_gate(CNOT, (1, 2), (2, 2))
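The O(χ⁵) cost comes from the gate-then-truncate pattern: merge the two neighbouring tensors over their shared bond, apply the gate to the joint physical legs, and split them again with a truncated SVD. The sketch below spells this out for horizontal neighbours; it assumes the [up, left, phys, right, down] index order and is not the library's implementation.

import torch

def apply_horizontal_2q_gate_sketch(gate, left, right, chi_max):
    """Illustrative gate-then-truncate step for horizontal neighbours; `left.right`
    and `right.left` share one bond."""
    d = left.shape[2]
    G = gate.reshape(d, d, d, d)                      # G[s1', s2', s1, s2]

    # 1. Merge the two sites over the shared bond x.
    theta = torch.einsum('ulsxd,yxtrz->ulsdytrz', left, right)

    # 2. Apply the gate to the two physical legs.
    theta = torch.einsum('abst,ulsdytrz->uladybrz', G, theta)

    # 3. Split back with an SVD and truncate the new bond to chi_max.
    ul, ll, dl = left.shape[0], left.shape[1], left.shape[4]
    ur, rr, dr = right.shape[0], right.shape[3], right.shape[4]
    M = theta.reshape(ul * ll * d * dl, ur * d * rr * dr)
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    k = min(chi_max, S.shape[0])
    sqrtS = torch.sqrt(S[:k]).to(U.dtype)

    new_left = (U[:, :k] * sqrtS).reshape(ul, ll, d, dl, k).permute(0, 1, 2, 4, 3)
    new_right = (sqrtS[:, None] * Vh[:k]).reshape(k, ur, d, rr, dr).permute(1, 0, 2, 3, 4)
    return new_left, new_right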
- contract_boundary_mps(chi_max=None)#
Contract PEPS network using boundary MPS method.
Sequentially contracts rows into a boundary MPS, producing final norm or expectation value.
- Parameters:
chi_max (int) – Maximum boundary MPS bond dimension (default: from config)
- Returns:
Norm squared ⟨ψ|ψ⟩
- Return type:
Complexity: O(rows × cols × χ⁵ × boundary_chi²)
Example:
norm = peps.contract_boundary_mps(chi_max=64)
print(f"||ψ||² = {norm.real:.10f}")  # Should be ~1.0
- compute_expectation(operator, positions)#
Compute expectation value of multi-site operator.
- Parameters:
operator (torch.Tensor) – Operator matrix (2ᵏ × 2ᵏ for k sites)
positions (list) – List of (row, col) tuples
- Returns:
Expectation value ⟨ψ|O|ψ⟩
- Return type:
Example:
import torch

# Single-site observable: ⟨Z_{2,3}⟩
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64, device='cuda')
exp_z = peps.compute_expectation(Z, [(2, 3)])

# Two-site observable: ⟨Z_{0,0} Z_{0,1}⟩
ZZ = torch.kron(Z, Z)
exp_zz = peps.compute_expectation(ZZ, [(0, 0), (0, 1)])
- to_mps()#
Convert PEPS to 1D MPS by contracting rows.
- Returns:
Equivalent MPS representation (approximate)
- Return type:
Use case: Interface with 1D algorithms after 2D preparation
Example:
mps = peps.to_mps()
print(f"MPS bond dimensions: {mps.bond_dimensions}")
- get_amplitude(bitstring)#
Compute amplitude for computational basis state.
- Parameters:
bitstring (str) – Bitstring in row-major order (e.g., ‘0101…’)
- Returns:
Amplitude ⟨bitstring|ψ⟩
- Return type:
Complexity: O(rows × cols × χ⁵)
Example:
# 4×4 grid (16 qubits)
amp = peps.get_amplitude('0' * 16)  # Amplitude of |0000...0⟩
print(f"Amplitude: {amp:.6f}")
Performance Characteristics#
Computational Complexity#
| Operation | Complexity |
|---|---|
| Single-qubit gate | O(χ⁴) |
| Two-qubit gate | O(χ⁵) |
| Boundary MPS contract | O(m×n×χ⁵×χ_b²) |
| Expectation value | O(m×n×χ⁵×χ_b²) |
where m, n are grid dimensions, χ is PEPS bond dim, χ_b is boundary MPS bond dim.
Scaling Limits#
GPU memory limits (A100 80GB):
# PEPS bond dimension χ
4×4 grid: χ ≤ 8 (tractable)
5×5 grid: χ ≤ 6 (tractable)
6×6 grid: χ ≤ 4 (tractable)
8×8 grid: χ ≤ 2 (limited)
# Beyond this: use distributed_mps or circuit cutting
Contraction time (NVIDIA A100):
4×4 grid, χ=4, χ_b=32: 0.5 sec
5×5 grid, χ=4, χ_b=32: 1.2 sec
6×6 grid, χ=4, χ_b=32: 3.5 sec
Examples#
2D Cluster State#
from atlas_q.peps import PEPS, PEPSConfig, ContractionStrategy
import torch
# Create 4×4 PEPS
config = PEPSConfig(
rows=4,
cols=4,
bond_dim=4,
contraction_strategy=ContractionStrategy.BOUNDARY_MPS,
boundary_chi=32,
device='cuda'
)
peps = PEPS(config)
# Step 1: Initialize all qubits in |+⟩
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64, device='cuda') / (2**0.5)
for i in range(4):
for j in range(4):
peps.apply_single_qubit_gate(H, i, j)
# Step 2: Apply CZ gates on all edges to create cluster state
CZ = torch.diag(torch.tensor([1, 1, 1, -1], dtype=torch.complex64, device='cuda'))
# Horizontal edges
for i in range(4):
for j in range(3):
peps.apply_two_qubit_gate(CZ, (i, j), (i, j+1))
# Vertical edges
for i in range(3):
for j in range(4):
peps.apply_two_qubit_gate(CZ, (i, j), (i+1, j))
# Contract and verify norm
norm = peps.contract_boundary_mps()
print(f"Cluster state norm: {norm.real:.6f}") # Should be ~1.0
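A stronger check than the norm is a cluster-state stabilizer, e.g. X on a corner qubit times Z on its two neighbours, whose expectation should be +1. The snippet below assumes compute_expectation returns a normalized expectation value and that the operator's Kronecker factors follow the order of the positions list.

# Stabilizer at corner (0,0): K = X_(0,0) Z_(0,1) Z_(1,0), expect ⟨K⟩ ≈ +1
X = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64, device='cuda')
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64, device='cuda')
K = torch.kron(X, torch.kron(Z, Z))
exp_k = peps.compute_expectation(K, [(0, 0), (0, 1), (1, 0)])
print(f"⟨X_(0,0) Z_(0,1) Z_(1,0)⟩ = {exp_k.real:.6f}")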
2D Ising Model Ground State#
from atlas_q.peps import PEPS, PEPSConfig
import torch
# 5×5 2D Ising model: H = -J Σ_<i,j> Z_i Z_j
peps = PEPS(PEPSConfig(rows=5, cols=5, bond_dim=4, device='cuda'))
# Initialize in |0⟩^⊗25 (ground state for ferromagnetic J>0)
# (already initialized to |0⟩ by default)
# Compute energy expectation
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64, device='cuda')
ZZ = torch.kron(Z, Z)
total_energy = 0.0
J = 1.0
# Horizontal bonds
for i in range(5):
for j in range(4):
energy = peps.compute_expectation(ZZ, [(i, j), (i, j+1)])
total_energy += -J * energy.real
# Vertical bonds
for i in range(4):
for j in range(5):
energy = peps.compute_expectation(ZZ, [(i, j), (i+1, j)])
total_energy += -J * energy.real
print(f"Ground state energy: {total_energy:.6f}")
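A transverse-field term -h Σ_i X_i can be added with single-site expectations in the same way. Note that for h ≠ 0 the stored |0⟩^⊗25 state is no longer the exact ground state; the sketch below (with an illustrative field strength h) simply evaluates its energy under the extended Hamiltonian.

# Transverse-field contribution: -h Σ_i ⟨X_i⟩ for the current PEPS state
h = 0.5
X = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64, device='cuda')
for i in range(5):
    for j in range(5):
        total_energy += -h * peps.compute_expectation(X, [(i, j)]).real
print(f"Energy with transverse field h={h}: {total_energy:.6f}")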
Shallow Quantum Supremacy Circuit#
from atlas_q.peps import PEPS, PEPSConfig
import torch
import numpy as np
# 5×5 grid, shallow circuit (10 layers)
peps = PEPS(PEPSConfig(rows=5, cols=5, bond_dim=6, device='cuda'))
# Layer 1: Hadamards
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64, device='cuda') / (2**0.5)
for i in range(5):
for j in range(5):
peps.apply_single_qubit_gate(H, i, j)
# Layers 2-10: Random single-qubit gates + structured two-qubit gates
for layer in range(9):
# Random single-qubit unitaries
for i in range(5):
for j in range(5):
theta = np.random.rand() * 2 * np.pi
phi = np.random.rand() * 2 * np.pi
            # U = RZ(phi) · RX(theta)
            RX = torch.tensor([
                [np.cos(theta/2), -1j*np.sin(theta/2)],
                [-1j*np.sin(theta/2), np.cos(theta/2)]
            ], dtype=torch.complex64, device='cuda')
            RZ = torch.tensor([
                [np.exp(-1j*phi/2), 0],
                [0, np.exp(1j*phi/2)]
            ], dtype=torch.complex64, device='cuda')
            U = RZ @ RX
peps.apply_single_qubit_gate(U, i, j)
# Structured two-qubit gates (checkerboard pattern)
iSWAP = torch.tensor([
[1, 0, 0, 0],
[0, 0, 1j, 0],
[0, 1j, 0, 0],
[0, 0, 0, 1]
], dtype=torch.complex64, device='cuda')
offset = layer % 2
for i in range(5):
for j in range(offset, 5, 2):
if j+1 < 5:
peps.apply_two_qubit_gate(iSWAP, (i, j), (i, j+1))
# Compute amplitude of |00...0⟩ (supremacy benchmark)
amp = peps.get_amplitude('0' * 25)
prob = abs(amp)**2
print(f"P(|00...0⟩) = {prob:.10f}")
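Supremacy-style benchmarks usually look at the spread of many bitstring probabilities rather than a single amplitude. As a rough probe, the sketch below queries a handful of random basis states; for a well-scrambling circuit their probabilities should scatter around the uniform value 1/2²⁵ (a Porter-Thomas-like spread). The sampling loop is illustrative only.

# Probe a few random basis states and compare against the uniform probability 1/2^25
n = 25
uniform_p = 1.0 / 2**n
for _ in range(5):
    bits = ''.join(np.random.choice(['0', '1']) for _ in range(n))
    p = abs(peps.get_amplitude(bits))**2
    print(f"{bits}: p = {p:.3e}  (p / p_uniform = {p / uniform_p:.2f})")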
Converting PEPS to MPS#
from atlas_q.peps import PEPS, PEPSConfig
# Create 4×4 PEPS state
peps = PEPS(PEPSConfig(rows=4, cols=4, bond_dim=4, device='cuda'))
# ... prepare state ...
# Convert to 1D MPS (16 qubits)
mps = peps.to_mps()
# Now use 1D algorithms
from atlas_q.tdvp import TDVP
from atlas_q.mpo_ops import MPOBuilder
H = MPOBuilder.ising_hamiltonian(n_sites=16, J=1.0, h=0.5, device='cuda')
tdvp = TDVP(mps=mps, hamiltonian=H, dt=0.1, method='two_site')
energy = tdvp.run(n_steps=50)
Use Cases#
When to Use PEPS#
2D lattice systems: Natural 2D structure (surface codes, 2D spin models)
Shallow circuits: Quantum supremacy experiments with depth ≤20
Small patches: 4×4 to 6×6 grids (16-36 qubits)
Area-law states: Moderate entanglement satisfying area law
Cluster state preparation: 2D graph states for MBQC
When NOT to Use PEPS#
Large grids: >6×6 becomes intractable (use circuit cutting or distributed MPS)
Deep circuits: Depth >20 causes bond dimension explosion
Volume-law entanglement: Random circuits with extensive entanglement
1D systems: Use MPS instead (more efficient)
PEPS vs MPS#
| Feature | PEPS | MPS |
|---|---|---|
| Geometry | 2D lattice | 1D chain |
| Entanglement capacity | Area law | Bounded |
| Contraction | #P-hard (approx.) | Polynomial (exact) |
| Max system size | ~6×6 (36 qubits) | ~100 qubits |
| Use case | 2D physics, patches | 1D systems, general |
Limitations#
Current Implementation#
- This is a “light” PEPS implementation optimized for:
Small patches (4×4 to 6×6)
Shallow circuits (depth ≤ 20)
Proof-of-concept 2D algorithms
- Not suitable for:
Large-scale 2D simulations (use circuit cutting)
Deep circuits (use 1D MPS with optimized layout)
Production PEPS algorithms (iTEBD, CTMRG)
- For large-scale 2D simulations, see:
Circuit Cutting - Partition large 2D grids into patches
Distributed MPS - Distributed computation for >50 qubits
2D/Planar Circuits - Optimized 2D→1D circuit mapping
Cross-References#
See Also#
Tensor Networks - PEPS theory and area law
2D/Planar Circuits - 2D qubit layout and routing
Circuit Cutting - Splitting large 2D circuits
atlas_q.adaptive_mps - 1D MPS (more efficient for non-2D systems)
Advanced Features Tutorial - PEPS tutorial
References#
Key papers on PEPS:
Verstraete, F. & Cirac, J. I. (2004). “Renormalization algorithms for quantum many-body systems in two and higher dimensions.” arXiv:cond-mat/0407066
Orús, R. (2014). “A practical introduction to tensor networks: Matrix product states and projected entangled pair states.” Annals of Physics, 349, 117-158.
Eisert, J. et al. (2010). “Colloquium: Area laws for the entanglement entropy.” Reviews of Modern Physics, 82(1), 277.
Arute, F. et al. (2019). “Quantum supremacy using a programmable superconducting processor.” Nature, 574(7779), 505-510.