atlas_q.adaptive_mps#
Adaptive Matrix Product State for Moderate-to-High Entanglement
Extends MatrixProductStatePyTorch with:

- Adaptive bond dimension by tolerance
- Per-bond χ caps and a global memory budget
- Mixed precision (complex64/complex128) support
- Two-site gate application with automatic SVD truncation
- Comprehensive logging and diagnostics
Mathematical guarantees:

- Local error control: \(\epsilon_{\text{local}}^2 = \sum_{i>k} \sigma_i^2 \le \epsilon_{\text{bond}}^2\)
- Global error bound: \(\epsilon_{\text{global}} \le \sqrt{\sum_b \epsilon_{\text{local},b}^2}\)
- Entropy: \(S_b = -\sum_i p_i \log p_i\), where \(p_i = \sigma_i^2 / \sum_j \sigma_j^2\)
Author: ATLAS-Q Contributors | Date: October 2025 | License: MIT
- class atlas_q.adaptive_mps.DTypePolicy(default=torch.complex64, promote_if_cond_gt=1000000.0)[source]#
Bases: object

Mixed precision policy configuration.
- class atlas_q.adaptive_mps.AdaptiveMPS(num_qubits, bond_dim=8, *, eps_bond=1e-06, chi_max_per_bond=256, budget_global_mb=None, dtype_policy=DTypePolicy(default=torch.complex64, promote_if_cond_gt=1000000.0), device='cuda', dtype=None)[source]#
Bases: MatrixProductStatePyTorch

Adaptive MPS for moderate-to-high entanglement simulation.

Key features:

- Variable per-bond dimensions with adaptive truncation
- Energy-based rank selection: keep k such that \(\sum_{i \le k} \sigma_i^2 \ge (1 - \epsilon^2) \sum_i \sigma_i^2\)
- Per-bond χ caps and global memory budget enforcement
- Mixed precision with automatic promotion on numerical instability
- Two-site gate application (TEBD-style)
- Comprehensive statistics and error tracking
- Example:
>>> mps = AdaptiveMPS(16, bond_dim=8, eps_bond=1e-6, chi_max_per_bond=64)
>>> H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64) / torch.sqrt(torch.tensor(2.0))
>>> for q in range(16):
...     mps.apply_single_qubit_gate(q, H)
>>> CZ = torch.diag(torch.tensor([1, 1, 1, -1], dtype=torch.complex64))
>>> for i in range(0, 15, 2):
...     mps.apply_two_site_gate(i, CZ)
>>> print(mps.stats_summary())
Methods

| Method | Description |
|---|---|
| apply_single_qubit_gate(q, U2) | Apply single-qubit gate (fast path, no truncation needed) |
| apply_two_qubit_gate(qubit1, qubit2, gate) | Apply two-qubit gate to adjacent qubits in MPS |
| apply_two_site_gate(i, U4) | Apply two-qubit gate with adaptive SVD truncation |
|  | Alias for to_left_canonical() for compatibility |
| canonicalize_right_to_left() | Bring MPS into right-canonical form using QR decomposition |
| cnot(control, target) | CNOT gate (uses Triton-optimized two-site gate) |
| cx(control, target) | Alias for CNOT |
| cy(control, target) | Controlled-Y gate |
| cz(q0, q1) | Controlled-Z gate |
| from_numpy_mps(numpy_mps_dict[, device]) | Create PyTorch MPS from NumPy MPS dictionary |
| get_amplitude(basis_state) | Contract MPS to get amplitude, O(n × χ²) |
|  | Get total memory usage in bytes |
| get_probability(basis_state) | Get measurement probability for a basis state |
| global_error_bound() | Get global error upper bound |
| h(qubit) | Hadamard gate |
| load_snapshot(path[, device]) | Load MPS from checkpoint file |
| measure([num_shots]) | Simulate measurement with accurate MPS sampling |
| memory_usage() | Memory usage in bytes |
|  | Reset statistics tracking |
| rx(qubit, theta) | Rotation around X axis |
| ry(qubit, theta) | Rotation around Y axis |
| rz(qubit, theta) | Rotation around Z axis |
| s(qubit) | Phase gate (S gate) |
| sample([num_shots]) | Sample measurement outcomes, delegates to parent class |
| sdg(qubit) | S dagger gate |
| snapshot(path) | Save MPS to file for checkpointing |
| stats_summary() | Get summary statistics |
| swap(q0, q1) | SWAP gate |
| sweep_sample([num_shots]) | Accurate MPS sampling using conditional probabilities sweep |
| t(qubit) | T gate |
| tdg(qubit) | T dagger gate |
| to_left_canonical() | Bring MPS into left-canonical form using QR |
| to_mixed_canonical(center) | Bring MPS into mixed-canonical form with center at specified site |
| to_numpy_mps() | Convert to NumPy MPS for compatibility |
|  | Convert MPS to full statevector (ONLY for small systems!) |
| x(qubit) | Pauli X gate |
| y(qubit) | Pauli Y gate |
| z(qubit) | Pauli Z gate |
- __init__(num_qubits, bond_dim=8, *, eps_bond=1e-06, chi_max_per_bond=256, budget_global_mb=None, dtype_policy=DTypePolicy(default=torch.complex64, promote_if_cond_gt=1000000.0), device='cuda', dtype=None)[source]#
Initialize Adaptive MPS
- Args:
num_qubits: Number of qubits
bond_dim: Initial bond dimension
eps_bond: Energy tolerance for truncation (default 1e-6)
chi_max_per_bond: Max χ per bond (int or list of ints)
budget_global_mb: Global memory budget in MB (None = unlimited)
dtype_policy: Mixed precision policy
device: 'cuda' or 'cpu'
dtype: Explicit dtype (overrides dtype_policy.default if provided)
- apply_single_qubit_gate(q, U2)[source]#
Apply single-qubit gate (fast path, no truncation needed)
- Args:
q: Qubit index
U2: 2x2 unitary gate
Complexity: O(χ²)
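For intuition, the single-site update amounts to one einsum over the physical leg. A minimal sketch, assuming site tensors of shape [χ_left, 2, χ_right] (the layout used throughout this page); this is illustrative, not the module's internal code:

import torch

chi = 4
A = torch.randn(chi, 2, chi, dtype=torch.complex64)          # site tensor [χ_l, 2, χ_r]
U2 = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64)   # Pauli-X

# Contract the gate into the physical leg; bond dimensions are unchanged,
# which is why no truncation is needed: cost is O(χ²) per site
A_new = torch.einsum('ps,lsr->lpr', U2, A)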
- apply_two_site_gate(i, U4)[source]#
Apply two-qubit gate with adaptive SVD truncation
This is the core TEBD operation for moderate entanglement.
- Args:
i: Bond index (applies to qubits i and i+1)
U4: 4x4 unitary gate (or 2x2x2x2 tensor)

Steps:
1. Merge tensors at sites (i, i+1) into Θ
2. Apply gate U
3. SVD: Θ = U S V†
4. Adaptively select rank k by energy criterion + caps
5. Split back into two cores with updated χ
Complexity: O(χ³) for SVD
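The five steps above can be sketched in plain PyTorch. This is an illustrative reimplementation under assumed tensor layouts, not the module's internal (Triton-optimized) code, and two_site_update is a hypothetical name:

import torch

def two_site_update(A, B, U4, eps_bond=1e-6, chi_max=256):
    # A: [chi_l, 2, chi], B: [chi, 2, chi_r], U4: 4x4 unitary (sketch only)
    chi_l, chi_r = A.shape[0], B.shape[2]
    theta = torch.einsum('lsa,atr->lstr', A, B)              # 1. merge into Theta
    U = U4.reshape(2, 2, 2, 2)
    theta = torch.einsum('stuv,luvr->lstr', U, theta)        # 2. apply gate
    M = theta.reshape(chi_l * 2, 2 * chi_r)
    Uu, S, Vh = torch.linalg.svd(M, full_matrices=False)     # 3. SVD
    cum = torch.cumsum(S**2, dim=0)
    k = int(torch.searchsorted(cum, (1 - eps_bond**2) * cum[-1])) + 1
    k = min(k, chi_max)                                      # 4. energy criterion + cap
    A_new = Uu[:, :k].reshape(chi_l, 2, k)                   # 5. split with new chi = k
    B_new = (S[:k, None] * Vh[:k]).reshape(k, 2, chi_r)
    return A_new, B_new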
- to_left_canonical()[source]#
Bring MPS into left-canonical form using QR
After this, each tensor \(A^{[i]}\) satisfies: \(\sum_s (A^{[i]}_s)^\dagger A^{[i]}_s = I\)
Complexity: O(n · χ³)
- to_mixed_canonical(center)[source]#
Bring MPS into mixed-canonical form with center at specified site
Sites 0..center-1 are left-canonical
Sites center+1..n-1 are right-canonical
Site center holds the normalization
- Args:
center: Center site index
Complexity: O(n · χ³)
- static load_snapshot(path, device='cuda')[source]#
Load MPS from checkpoint file
- Args:
path: File path to load from
device: Device to place tensors on
- Returns:
Loaded AdaptiveMPS instance
Overview#
The adaptive_mps module provides adaptive Matrix Product State simulation with intelligent resource management. Unlike fixed bond dimension approaches, adaptive MPS dynamically adjusts χ based on entanglement structure, memory constraints, and accuracy requirements.
Key Features#
Per-bond adaptation: Individual χ limits for each bond
Global memory budgets: Automatic χ reduction to fit memory constraints
Mixed precision: Automatic promotion to float64 when condition numbers are high
Error tracking: Comprehensive statistics on truncation errors and resource usage
GPU-optimized: Native CUDA support with efficient memory management
Canonicalization: Left, right, and mixed canonical forms for numerical stability
Why Adaptive MPS?#
Fixed bond dimension wastes resources:
High entanglement regions need large χ
Low entanglement regions can use small χ
Fixed χ either wastes memory or loses accuracy
Adaptive bond dimension optimizes dynamically:
\[\chi_i = \min\left(\chi_{\text{max},i},\ \chi_{\text{Budget}},\ \chi_{\text{SVD}}(\epsilon)\right)\]

where:

- \(\chi_{\text{max},i}\) is the per-bond maximum
- \(\chi_{\text{Budget}}\) is set by the global memory limit
- \(\chi_{\text{SVD}}(\epsilon)\) is determined by the truncation threshold ε
Mathematical Background#
Matrix Product State Representation#
An n-qubit quantum state is represented as:
\[|\psi\rangle = \sum_{s_1,\dots,s_n} A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[n]}_{s_n}\, |s_1 s_2 \dots s_n\rangle\]

where each tensor \(A^{[i]}_{s_i}\) has shape \([\chi_{i-1}, d, \chi_i]\) with:

- \(d = 2\) for qubits
- \(\chi_i\) the bond dimension between sites i and i+1
Memory scaling: \(O(n \chi^2 d)\) vs. \(O(d^n)\) for full statevector.
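This layout is also what makes amplitude extraction cheap (cf. get_amplitude, O(n × χ²) in the method table). A minimal sketch, assuming boundary bond dimensions of 1; amplitude is a hypothetical helper, not the module's API:

import torch

def amplitude(cores, bits):
    # Contract an MPS (list of [chi_l, 2, chi_r] tensors) against one basis state.
    v = torch.ones(1, dtype=cores[0].dtype)
    for A, s in zip(cores, bits):
        v = v @ A[:, s, :]        # one O(chi^2) matrix-vector step per site
    return v.item()               # boundary bond dimensions are 1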
Adaptive Truncation#
After applying a two-qubit gate, the bond dimension can grow. SVD is used to truncate:

\[\Theta = U S V^\dagger,\qquad \Theta \approx \sum_{i \le k} \sigma_i\, u_i v_i^\dagger\]

where the k largest singular values are kept such that:

\[\sum_{i \le k} \sigma_i^2 \ge (1 - \epsilon^2) \sum_i \sigma_i^2\]
Adaptive strategy: Choose k differently for each bond (a minimal sketch follows this list) based on:
Error threshold ε: User-specified truncation tolerance
Per-bond cap \(\chi_{\text{max},i}\): Maximum allowed at bond i
Memory budget: Global constraint across all bonds
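A minimal sketch of that per-bond rank selection, combining the energy criterion with the cap; select_rank is a hypothetical helper, not part of the module's API:

import torch

def select_rank(S, eps_bond, chi_max):
    # Smallest k with sum_{i<=k} sigma_i^2 >= (1 - eps^2) * total, clamped to the cap.
    # Also returns the local error sqrt(sum_{i>k} sigma_i^2) for the global bound.
    p2 = S**2
    cum = torch.cumsum(p2, dim=0)
    k = int(torch.searchsorted(cum, (1 - eps_bond**2) * cum[-1])) + 1
    k = min(k, int(chi_max))
    eps_local = float(p2[k:].sum().sqrt())
    return k, eps_local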
Canonical Forms#
MPS can be brought into canonical forms for numerical stability:
Left-canonical: \(\sum_{s} (A^{[i]}_s)^\dagger A^{[i]}_s = I\)
Right-canonical: \(\sum_{s} A^{[i]}_s (A^{[i]}_s)^\dagger = I\)
- Mixed-canonical (centered at site c):
Sites 1 to c-1: left-canonical
Site c: general tensor
Sites c+1 to n: right-canonical
Advantage: Simplifies expectation value calculations and improves numerical stability.
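A sketch of the standard left-to-right QR sweep behind to_left_canonical(), assuming [χ_l, 2, χ_r] site tensors (illustrative only, not the module's exact code):

import torch

def left_canonicalize(cores):
    # Sweep left to right: QR each reshaped core, push R into the next site.
    # Afterwards every swept core A satisfies sum_s A_s^dagger A_s = I.
    for i in range(len(cores) - 1):
        chi_l, d, chi_r = cores[i].shape
        Q, R = torch.linalg.qr(cores[i].reshape(chi_l * d, chi_r))
        cores[i] = Q.reshape(chi_l, d, Q.shape[1])
        cores[i + 1] = torch.einsum('ab,bsc->asc', R, cores[i + 1])
    return cores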
Classes#
| Class | Description |
|---|---|
| AdaptiveMPS | Adaptive MPS for moderate-to-high entanglement simulation |
| DTypePolicy | Mixed precision policy configuration |
AdaptiveMPS#
Main class for adaptive Matrix Product State simulation.
Provides dynamic bond dimension management, error tracking, and efficient gate application. Supports both single-site and two-site updates with automatic truncation.
Constructor:
mps = AdaptiveMPS(
    num_qubits=20,
    bond_dim=32,            # Initial χ
    chi_max_per_bond=256,   # Per-bond maximum
    budget_global_mb=4096,  # 4 GB memory budget
    eps_bond=1e-8,          # Truncation threshold
    dtype_policy=None,      # Mixed precision policy
    device='cuda'
)
Parameters:
- num_qubits (int): Number of qubits in the system
- bond_dim (int): Initial bond dimension χ (default: 8)
- chi_max_per_bond (int or list): Maximum χ per bond. If int, applied uniformly; if list, per-bond limits.
- budget_global_mb (float): Global memory budget in megabytes
- eps_bond (float): Truncation tolerance for SVD (default: 1e-6)
- dtype_policy (DTypePolicy): Mixed precision configuration
- device (str): 'cpu' or 'cuda'
Storage: Approximately \(16 n \chi^2\) bytes for complex64
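Plugging the constructor values above into this rule (the printed numbers follow directly from the formula):

n = 20
for chi in (32, 256):   # initial χ vs. the per-bond cap
    print(f"χ={chi}: ≈{16 * n * chi**2 / 1024**2:.2f} MB")
# χ=32: ≈0.31 MB;  χ=256: ≈20.00 MB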
Methods

| Method | Description |
|---|---|
| __init__(num_qubits[, bond_dim, eps_bond, ...]) | Initialize Adaptive MPS |
| apply_single_qubit_gate(q, U2) | Apply single-qubit gate (fast path, no truncation needed) |
| apply_two_site_gate(i, U4) | Apply two-qubit gate with adaptive SVD truncation |
| stats_summary() | Get summary statistics |
| global_error_bound() | Get global error upper bound |
|  | Reset statistics tracking |
| memory_usage() | Memory usage in bytes |
| to_left_canonical() | Bring MPS into left-canonical form using QR |
|  | Convert MPS to full statevector (ONLY for small systems!) |
| sample([num_shots]) | Sample measurement outcomes, delegates to parent class |
| measure([num_shots]) | Simulate measurement with accurate MPS sampling |
Key Methods#
apply_single_qubit_gate#
mps.apply_single_qubit_gate(qubit, gate_matrix)
Apply a single-qubit gate without changing bond dimensions.
- Parameters:
- qubit (int): Target qubit index
- gate_matrix (torch.Tensor): 2×2 unitary matrix
Complexity: O(χ²) where χ is bond dimension at qubit
Example:
import torch
from atlas_q.adaptive_mps import AdaptiveMPS
mps = AdaptiveMPS(num_qubits=10, bond_dim=16, device='cuda')
# Hadamard gate
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64, device='cuda') / (2**0.5)
mps.apply_single_qubit_gate(0, H)
# Pauli-X gate
X = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64, device='cuda')
mps.apply_single_qubit_gate(5, X)
apply_two_site_gate#
mps.apply_two_site_gate(site, gate_matrix)
Apply a two-qubit gate with adaptive truncation.
- Parameters:
- site (int): Index of first qubit (gate acts on site and site+1)
- gate_matrix (torch.Tensor): 4×4 unitary matrix
Complexity: O(χ³) for SVD truncation
Example:
from atlas_q.adaptive_mps import AdaptiveMPS
import torch
mps = AdaptiveMPS(num_qubits=20, bond_dim=32, chi_max_per_bond=128, device='cuda')
# CNOT gate
CNOT = torch.tensor([
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 1],
[0, 0, 1, 0]
], dtype=torch.complex64, device='cuda')
for i in range(19):
mps.apply_two_site_gate(i, CNOT)
print(f"Max bond dimension: {max(mps.bond_dimensions)}")
print(f"Memory usage: {mps.memory_usage() / 1024**2:.2f} MB")
expectation_value#
energy = mps.expectation_value(operator_mpo)
Compute expectation value \(\langle\psi|\hat{O}|\psi\rangle\).
- Parameters:
- operator_mpo (MPO): Operator represented as a Matrix Product Operator
- Returns:
complex: Expectation value
Complexity: O(n χ² D²) where D is MPO bond dimension
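The MPO tensor layout is not specified on this page, so the following is a generic transfer-matrix sketch consistent with the stated scaling; it assumes MPO site tensors of shape [D_l, 2, 2, D_r] with (out, in) physical legs and boundary dimensions of 1, and mpo_expectation is a hypothetical name:

import torch

def mpo_expectation(cores, mpo):
    # <psi|O|psi> by sweeping a 3-leg environment E[top, mpo, bottom] left to right.
    E = torch.ones(1, 1, 1, dtype=cores[0].dtype)
    for A, W in zip(cores, mpo):
        E = torch.einsum('abc,asd->bcsd', E, A.conj())   # absorb bra tensor
        E = torch.einsum('bcsd,bste->cdte', E, W)        # absorb MPO tensor
        E = torch.einsum('cdte,ctf->def', E, A)          # absorb ket tensor
    return E.item()                                      # boundary dims are 1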
stats_summary#
stats = mps.stats_summary()
Get comprehensive statistics on MPS state.
- Returns:
Dictionary with:
- max_chi: Maximum bond dimension
- avg_chi: Average bond dimension
- total_params: Total number of MPS parameters
- memory_mb: Memory usage in megabytes
- num_truncations: Number of truncation operations performed
- max_local_error: Maximum local truncation error
- sum_squared_errors: Sum of squared truncation errors
Example:
from atlas_q.adaptive_mps import AdaptiveMPS
mps = AdaptiveMPS(num_qubits=30, bond_dim=64, device='cuda')
# ... apply gates ...
stats = mps.stats_summary()
print(f"χ_max = {stats['max_chi']}, χ_avg = {stats['avg_chi']:.1f}")
print(f"Memory: {stats['memory_mb']:.2f} MB")
print(f"Truncations: {stats['num_truncations']}")
print(f"Max error: {stats['max_local_error']:.2e}")
global_error_bound#
error = mps.global_error_bound()
Compute rigorous upper bound on accumulated truncation error.
- Returns:
float: Error bound δ such that \(\||\psi_{\text{true}}\rangle - |\psi_{\text{MPS}}\rangle\| \leq \delta\)
Formula:

\[\delta = \sqrt{\sum_i \epsilon_i^2}\]

where \(\epsilon_i\) are the local truncation errors.
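For example, assuming three truncations with hypothetical local errors 1e-8, 3e-8, and 2e-8:

import math

local_errors = [1e-8, 3e-8, 2e-8]              # hypothetical ε_i from three truncations
delta = math.sqrt(sum(e**2 for e in local_errors))
print(f"δ ≤ {delta:.2e}")                       # ≈3.74e-08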
to_left_canonical / canonicalize_right_to_left#
mps.to_left_canonical()
mps.canonicalize_right_to_left()
Convert MPS to canonical form for numerical stability.
- Use cases:
Improve condition numbers before SVD operations
Simplify expectation value calculations
Prepare for TDVP time evolution
Complexity: O(n χ³)
DTypePolicy#
- class atlas_q.adaptive_mps.DTypePolicy(default=torch.complex64, promote_if_cond_gt=1000000.0)[source]#
Bases: object

Mixed precision policy configuration.
Configuration for mixed-precision simulation with automatic promotion.
Constructor:
from atlas_q.adaptive_mps import DTypePolicy
import torch

policy = DTypePolicy(
    default=torch.complex64,
    promote_if_cond_gt=1e6
)
Parameters:
- default (torch.dtype): Default data type (torch.complex64 or torch.complex128)
- promote_if_cond_gt (float): Threshold for automatic promotion. If the condition number exceeds this value, promote to higher precision.
Strategy:

1. Use the default dtype (e.g., complex64) for most operations (2× memory savings)
2. Monitor condition numbers during SVD
3. If cond(M) > promote_if_cond_gt, promote to complex128 for that operation
4. Convert the result back to the default dtype
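A sketch of that strategy around a single SVD; svd_with_promotion is a hypothetical helper, not the module's API:

import torch

def svd_with_promotion(M, promote_if_cond_gt=1e6):
    # First pass in the default precision
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    cond = float(S[0] / S[-1]) if float(S[-1]) > 0 else float('inf')
    if cond > promote_if_cond_gt:
        # Redo in complex128, then demote the factors back to the default dtype
        U, S, Vh = torch.linalg.svd(M.to(torch.complex128), full_matrices=False)
        U, Vh = U.to(M.dtype), Vh.to(M.dtype)
    return U, S, Vh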
Example:
from atlas_q.adaptive_mps import AdaptiveMPS, DTypePolicy
import torch

# Use complex64 by default, promote if condition number > 10^6
policy = DTypePolicy(default=torch.complex64, promote_if_cond_gt=1e6)

mps = AdaptiveMPS(
    num_qubits=30,
    bond_dim=64,
    dtype_policy=policy,
    device='cuda'
)

# The MPS will automatically:
# - Use complex64 for well-conditioned operations (2× faster, 2× less memory)
# - Promote to complex128 when ill-conditioned (maintains accuracy)
Performance:

| Metric | complex64 | complex128 | Adaptive |
|---|---|---|---|
| Memory | 1.0× | 2.0× | ~1.1× (best) |
| Speed | 1.0× (fastest) | 0.5× | ~0.9× |
| Accuracy (well-conditioned) | Good | Excellent | Good |
| Accuracy (ill-conditioned) | Poor | Excellent | Excellent |
Performance Characteristics#
Computational Complexity#
| Operation | Fixed χ MPS | Adaptive MPS |
|---|---|---|
| Single-qubit gate | O(χ²) | O(χ²) |
| Two-qubit gate | O(χ³) | O(χ³) + O(χ) check |
| Canonicalization | O(n χ³) | O(n χ³) |
| Expectation value | O(n χ² D²) | O(n χ² D²) |
| Memory | O(n χ²) | O(n χ_avg²) (better) |
where D is MPO bond dimension.
Memory Savings#
Adaptive MPS reduces memory by using smaller χ where possible:
# Example: 30-qubit system with varying entanglement (≈16 n χ² bytes, complex64)
Fixed χ=128:  16 × 30 × 128² bytes ≈ 7.9 MB
Adaptive χ:   χ ∈ [16, 32, 64, 128], avg ≈ 50  →  ≈1.2 MB
Memory savings: ~6.5× with minimal accuracy loss
Benchmark Results#
From scripts/benchmarks/validate_all_features.py:
# 50-qubit quantum chemistry VQE
Fixed χ=128: Memory=156 MB, Time=2.5 sec, Error=1e-7
Adaptive: Memory= 45 MB, Time=2.2 sec, Error=1e-7
Savings: 3.5× memory, 12% faster (less data movement)
# 100-qubit random circuit (depth=50)
Fixed χ=64: Memory= 31 MB, Time=5.0 sec, χ insufficient → Error=1e-3
Adaptive: Memory= 48 MB, Time=5.8 sec, Error=1e-7
Result: Higher accuracy with 55% more memory (still practical)
Examples#
Basic Usage#
from atlas_q.adaptive_mps import AdaptiveMPS
import torch
# Create MPS
mps = AdaptiveMPS(num_qubits=10, bond_dim=8, device='cuda')
# Apply gates
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.complex64, device='cuda') / (2**0.5)
for q in range(10):
mps.apply_single_qubit_gate(q, H)
# Check statistics
stats = mps.stats_summary()
print(f"Max χ: {stats['max_chi']}")
print(f"Global error: {mps.global_error_bound():.2e}")
Adaptive Truncation with Memory Budget#
from atlas_q.adaptive_mps import AdaptiveMPS
import torch
# Limit memory to 2 GB
mps = AdaptiveMPS(
num_qubits=50,
bond_dim=32, # Initial χ
chi_max_per_bond=256, # Per-bond max
budget_global_mb=2048, # 2 GB limit
eps_bond=1e-8,
device='cuda'
)
# Apply gates - χ will adapt automatically
CNOT = torch.tensor([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]],
dtype=torch.complex64, device='cuda')
for i in range(49):
mps.apply_two_site_gate(i, CNOT)
print(f"Bond dimensions: {mps.bond_dimensions}")
print(f"Memory usage: {mps.memory_usage() / 1024**2:.2f} MB")
print(f"Within budget: {mps.memory_usage() / 1024**2 <= 2048}")
Mixed Precision for Accuracy#
from atlas_q.adaptive_mps import AdaptiveMPS, DTypePolicy
import torch
# Mixed precision with automatic promotion
policy = DTypePolicy(default=torch.complex64, promote_if_cond_gt=1e6)
mps = AdaptiveMPS(
num_qubits=30,
bond_dim=64,
dtype_policy=policy,
device='cuda'
)
def random_unitary(dim, device='cuda'):
    # Hypothetical helper (not part of atlas_q): Haar-like unitary via QR
    A = torch.randn(dim, dim, dtype=torch.complex64, device=device)
    Q, R = torch.linalg.qr(A)
    return Q * (torch.diagonal(R) / torch.diagonal(R).abs())

# Apply random gates; promotion to complex128 triggers automatically
# when the SVD of the two-site wavefunction becomes ill-conditioned
for i in range(29):
    U = random_unitary(4, device='cuda')
    mps.apply_two_site_gate(i, U)
# Check how many promotions occurred
# (This info would be in debug logs)
Per-Bond Adaptation#
from atlas_q.adaptive_mps import AdaptiveMPS
# Different χ limits for different regions
# High entanglement in center, low at edges
chi_per_bond = [32] * 10 + [128] * 20 + [32] * 9 # 40 qubits
mps = AdaptiveMPS(
num_qubits=40,
bond_dim=16, # Initial
chi_max_per_bond=chi_per_bond, # Per-bond limits
eps_bond=1e-8,
device='cuda'
)
# Center bonds can grow to 128, edges limited to 32
# Automatically optimizes memory usage based on entanglement structure
Measurement and Sampling#
from atlas_q.adaptive_mps import AdaptiveMPS
mps = AdaptiveMPS(num_qubits=20, bond_dim=64, device='cuda')
# ... prepare state ...
# Sample measurement outcomes with accurate MPS sampling
# (assuming measure() returns one outcome per shot)
outcomes = mps.measure(num_shots=1000)
from collections import Counter
histogram = Counter(outcomes)
print(f"Histogram: {histogram}")
Canonicalization for Stability#
from atlas_q.adaptive_mps import AdaptiveMPS
import torch
mps = AdaptiveMPS(num_qubits=30, bond_dim=64, device='cuda')
# After many operations, numerical errors accumulate
# ... 1000s of gate applications ...
# Restore numerical stability
mps.to_left_canonical()
mps.normalize()
# Check norm
norm = torch.sqrt(mps.inner_product(mps).real)
print(f"Norm: {norm:.10f}") # Should be ~1.0
Use Cases#
When to Use Adaptive MPS#
Unknown entanglement structure: Don’t know a priori which regions need high χ
Memory constraints: Limited GPU memory but need to simulate as many qubits as possible
Variable entanglement: System has regions of high and low entanglement
Long simulations: Numerical stability important over many operations
Production systems: Need guaranteed memory bounds
When to Use Fixed χ#
Predictable entanglement: Know exact χ requirements beforehand
Performance critical: Need absolute maximum speed (no adaptive overhead)
Small systems: χ requirements low enough that adaptation unnecessary
Benchmarking: Comparing against fixed-χ results in literature
Cross-References#
See Also#
Adaptive Truncation - Theory of adaptive truncation
How to Handle Large Quantum Systems - Practical tips for large-scale simulation
MPS PyTorch Backend - Basic MPS implementation
atlas_q.diagnostics - Statistics and monitoring
atlas_q.linalg_robust - Robust linear algebra with automatic fallbacks
atlas_q.truncation - Truncation strategies and error bounds
References#
Key papers:
Schollwöck, U. (2011). “The density-matrix renormalization group in the age of matrix product states.” Annals of Physics, 326(1), 96-192.
Paeckel, S. et al. (2019). “Time-evolution methods for matrix-product states.” Annals of Physics, 411, 167998.
Hubig, C. et al. (2015). “Strictly single-site DMRG algorithm with subspace expansion.” Physical Review B, 91(15), 155115.