Installation#
Prerequisites#
ATLAS-Q requires:
Python 3.9 or higher
PyTorch 2.0 or higher
NumPy 1.22 or higher
SciPy 1.10 or higher
Optional dependencies:
Triton 2.0+ for GPU acceleration (Linux only)
cuQuantum 23.0+ for NVIDIA GPU optimization
PySCF 2.0+ for molecular Hamiltonians
OpenFermion 1.5+ for fermionic operators
Hardware:
CPU: Any modern x86_64 processor
GPU: NVIDIA GPU with CUDA support (compute capability 7.0+, recommended 8.0+)
Tested on: V100, A100, H100, RTX 4090
GPU memory: 4GB minimum, 16GB+ recommended for large simulations
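Before installing, you can sanity-check your environment against the requirements above. A minimal sketch (the import names triton, cuquantum, pyscf, and openfermion are assumed to match the optional packages listed; only what is already installed is reported):
# Check the Python version and probe for required and optional packages.
import importlib.util
import sys

print(f"Python {sys.version_info.major}.{sys.version_info.minor} (need 3.9+)")
for name in ("numpy", "scipy", "torch", "triton", "cuquantum", "pyscf", "openfermion"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'missing'}")

# If PyTorch is present, check the GPU against the hardware guidance above.
try:
    import torch
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        mem_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor} "
              f"(need 7.0+), {mem_gb:.1f} GB memory (4 GB minimum)")
    else:
        print("No CUDA GPU detected; running on CPU")
except ImportError:
    print("PyTorch not installed yet")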
Installation Methods#
PyPI Installation (Recommended)#
Standard installation:
pip install atlas-quantum
With GPU support (includes Triton kernels):
pip install atlas-quantum[gpu]
With molecular chemistry support:
pip install atlas-quantum[chemistry]
With all optional dependencies:
pip install atlas-quantum[all]
Using a virtual environment is recommended:
python -m venv atlas-env
source atlas-env/bin/activate # Linux/macOS
atlas-env\Scripts\activate # Windows
pip install atlas-quantum[gpu]
Docker Installation#
Pull the GPU-enabled image:
docker pull ghcr.io/followthesapper/atlas-q:cuda
Run an interactive session:
docker run --rm -it --gpus all ghcr.io/followthesapper/atlas-q:cuda python3
Run with volume mounting for persistent data:
docker run --rm -it --gpus all \
-v $(pwd)/data:/data \
ghcr.io/followthesapper/atlas-q:cuda python3
CPU-only image:
docker pull ghcr.io/followthesapper/atlas-q:cpu
docker run --rm -it ghcr.io/followthesapper/atlas-q:cpu python3
Run benchmarks in Docker:
docker run --rm --gpus all ghcr.io/followthesapper/atlas-q:cuda \
python3 /opt/atlas-q/scripts/benchmarks/validate_all_features.py
Building from Source#
Clone the repository:
git clone https://github.com/followthesapper/ATLAS-Q.git
cd ATLAS-Q
Install in development mode:
pip install -e .
With GPU support:
pip install -e .[gpu]
Configure GPU acceleration (auto-detects your GPU):
./setup_triton.sh
This script:
Detects GPU architecture (V100, A100, H100, etc.)
Sets the TORCH_CUDA_ARCH_LIST environment variable
Configures TRITON_PTXAS_PATH
Adds the settings to ~/.bashrc for persistence
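If you cannot run the script (for example on a non-Linux host), the same variables can be set by hand for the current process. A minimal sketch, assuming the CUDA Toolkit installs ptxas at /usr/local/cuda/bin/ptxas; unlike the script, this does not persist anything to ~/.bashrc:
import os
import torch

# Detect the GPU architecture, then export it for this process only.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)  # e.g. (8, 0) on A100
    os.environ["TORCH_CUDA_ARCH_LIST"] = f"{major}.{minor}"
# Assumed default CUDA Toolkit location; adjust if ptxas lives elsewhere.
os.environ["TRITON_PTXAS_PATH"] = "/usr/local/cuda/bin/ptxas"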
Run tests:
pytest tests/ -v
Run benchmarks:
python scripts/benchmarks/validate_all_features.py
Build requirements:
GCC 9+ or Clang 10+ (Linux)
MSVC 2019+ (Windows)
CUDA Toolkit 11.8+ (for GPU support)
Git
Google Colab / Jupyter#
Open the interactive notebook directly in Google Colab:
Open ATLAS_Q_Demo.ipynb in Colab
Or download and run locally:
wget https://github.com/followthesapper/ATLAS-Q/raw/ATLAS-Q/ATLAS_Q_Demo.ipynb
jupyter notebook ATLAS_Q_Demo.ipynb
Install in a Colab session:
!pip install atlas-quantum[gpu]
Conda-forge (Coming Soon)#
The conda-forge package is under review. Once approved:
conda install -c conda-forge atlas-quantum
APT Installation (Future)#
A Debian/Ubuntu package is planned for a future release.
UV Installation (Future)#
Support for the UV package manager is planned.
Verification#
Verify installation:
import atlas_q
print(atlas_q.__version__)
Check GPU availability:
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
print(f"CUDA device: {torch.cuda.get_device_name(0)}")
Check Triton availability:
try:
import triton
print(f"Triton available: {triton.__version__}")
except ImportError:
print("Triton not installed (GPU kernels disabled)")
Check cuQuantum availability:
from atlas_q import get_cuquantum
cuq = get_cuquantum()
print(f"cuQuantum available: {cuq['is_cuquantum_available']()}")
if cuq['is_cuquantum_available']():
print(f"cuQuantum version: {cuq['get_cuquantum_version']()}")
Run a basic test:
from atlas_q import get_quantum_sim
QCH, _, _, _ = get_quantum_sim()
sim = QCH()
factors = sim.factor_number(21)
assert factors[0] * factors[1] == 21
print("Installation verified")
Troubleshooting#
CUDA/GPU Issues#
If CUDA is not detected:
Verify NVIDIA driver installation:
nvidia-smi
Check PyTorch CUDA installation:
import torch
print(torch.version.cuda)
print(torch.cuda.is_available())
Reinstall PyTorch with CUDA support:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Triton Compilation Errors#
If Triton kernels fail to compile:
Check CUDA Toolkit installation:
nvcc --version
Set the architecture manually:
export TORCH_CUDA_ARCH_LIST="8.0"  # For A100
export TORCH_CUDA_ARCH_LIST="9.0"  # For H100
Verify the ptxas path:
export TRITON_PTXAS_PATH="/usr/local/cuda/bin/ptxas"
Memory Errors#
If you encounter out-of-memory errors:
Reduce bond dimension:
mps = AdaptiveMPS(num_qubits=20, bond_dim=32, chi_max_per_bond=64)
Enable memory budgets:
mps = AdaptiveMPS(num_qubits=20, bond_dim=32, budget_global_mb=4096)
Use mixed precision:
from atlas_q.adaptive_mps import DTypePolicy
policy = DTypePolicy(default=torch.complex64)
mps = AdaptiveMPS(num_qubits=20, bond_dim=32, dtype_policy=policy)
Import Errors#
If optional dependencies are missing:
pip install pyscf openfermion openfermionpyscf # For chemistry
pip install cuquantum-python # For cuQuantum
pip install triton # For GPU kernels
cuQuantum Issues#
cuQuantum is optional. If unavailable, ATLAS-Q automatically falls back to PyTorch implementations. To install:
pip install cuquantum-python
Requires the CUDA Toolkit and a compatible NVIDIA GPU.
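To confirm which path is active, the get_cuquantum accessor from the Verification section can be reused:
from atlas_q import get_cuquantum

cuq = get_cuquantum()
if cuq['is_cuquantum_available']():
    print("cuQuantum backend active")
else:
    print("cuQuantum not found; using the PyTorch fallback")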
Version Compatibility#
Tested configurations:
Python 3.9, 3.10, 3.11, 3.12
PyTorch 2.0, 2.1, 2.2, 2.3
CUDA 11.8, 12.0, 12.1
Triton 2.0, 2.1, 2.2
cuQuantum 23.0, 24.0, 25.0
Minimum versions:
Python: 3.9
PyTorch: 2.0
NumPy: 1.22
SciPy: 1.10
Platform support:
Linux: Full support (CUDA + Triton + cuQuantum)
macOS: CPU only (no CUDA/Triton)
Windows: CUDA support (Triton limited)
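To compare an installed environment against the minimum versions above, a quick check (assuming PyTorch, NumPy, and SciPy are already installed):
import sys
import numpy
import scipy
import torch

# Print installed versions next to the documented minimums.
print(f"Python:  {sys.version.split()[0]}  (minimum 3.9)")
print(f"PyTorch: {torch.__version__}  (minimum 2.0)")
print(f"NumPy:   {numpy.__version__}  (minimum 1.22)")
print(f"SciPy:   {scipy.__version__}  (minimum 1.10)")
if torch.cuda.is_available():
    print(f"CUDA (built into PyTorch): {torch.version.cuda}")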