Tutorial 38: ANN-to-SNN Conversion¶
SC-NeuroCore converts trained PyTorch ANNs to rate-coded spiking neural networks. Train with standard PyTorch, convert in one function call, deploy to FPGA via the SC pipeline.
Why Convert?¶
Most practitioners have trained ANNs, not SNNs. Conversion bridges the gap: use PyTorch's mature ecosystem for training, then convert to an SNN that runs on neuromorphic hardware with the energy efficiency of spike-based computation.
1. Train a PyTorch ANN (Standard)¶
import torch
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
# Train with standard optimizer...
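The elided training step is ordinary PyTorch; nothing SNN-specific happens here. A minimal sketch (the synthetic batch, optimizer, and hyperparameters are placeholders — substitute a real DataLoader, e.g. MNIST, in practice):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Stand-in batch; replace with batches from a real DataLoader.
x = torch.rand(64, 784)
y = torch.randint(0, 10, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(5):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```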
2. Convert to SNN¶
from sc_neurocore.conversion import convert
calibration_data = torch.rand(100, 784) # representative inputs in [0, 1] — use real data in practice
snn = convert(
    model,
    calibration_data=calibration_data,
    T=32,  # simulation timesteps (higher = more accurate)
)
print(f"Converted: {snn.n_layers} layers, T={snn.T}")
3. Run the Converted SNN¶
import numpy as np
x = np.random.rand(784) # input in [0, 1]
spike_counts = snn.run(x)
prediction = snn.classify(x)
print(f"Prediction: {prediction}")
Batch inference:
x_batch = np.random.rand(100, 784)
predictions = snn.classify(x_batch)
4. How It Works¶
The conversion pipeline:
- Extract weights from each nn.Linear layer
- Calibrate thresholds: run calibration data through the ANN and record per-layer activation statistics (the 99.9th percentile)
- Normalize weights: scale so the maximum activation maps to the firing threshold
- Build IF neurons: each ReLU becomes an integrate-and-fire neuron with its threshold taken from calibration
- Rate coding: input values become Poisson spike trains over T steps; an ANN activation a maps to an expected spike count of a*T/threshold
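The last two steps can be illustrated without the library. Below is a plain-NumPy sketch (not SC-NeuroCore's internal implementation) of one integrate-and-fire neuron driven by a Poisson-encoded input; the identity weight and threshold value are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 32
threshold = 1.0
a = 0.7  # ANN activation, already normalized into [0, threshold]

# Rate coding: at each timestep, emit an input spike with probability a / threshold.
input_spikes = rng.random(T) < (a / threshold)

# Integrate-and-fire: accumulate weighted input; spike and reset on threshold crossing.
w = 1.0   # identity weight for illustration
v = 0.0
out_spikes = 0
for t in range(T):
    v += w * input_spikes[t]
    if v >= threshold:
        out_spikes += 1
        v -= threshold  # "soft reset" (subtract threshold) keeps the residual charge

rate = out_spikes / T  # approximates a / threshold as T grows
```

With more timesteps the output rate concentrates around a/threshold, which is why larger T improves conversion accuracy.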
5. QCFS Activation for Training¶
For higher accuracy, replace ReLU with QCFS during ANN training. QCFS quantizes activations to T+1 levels, matching achievable SNN spike rates:
from sc_neurocore.conversion import QCFSActivation
model = nn.Sequential(
    nn.Linear(784, 256),
    QCFSActivation(T=16),
    nn.Linear(256, 10),
)
# Train with QCFS — accuracy will be slightly lower than ReLU
# but conversion to SNN will be nearly lossless
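The quantization itself is simple to state. A NumPy sketch of a QCFS-style (quantize-clip-floor-shift) forward pass — an illustration, not the library's QCFSActivation; the threshold lam is a placeholder for what is a learned parameter in training, where the floor's zero gradient is bypassed with a straight-through estimator:

```python
import numpy as np

def qcfs(a, T=16, lam=1.0):
    """Map activations onto T+1 evenly spaced levels in [0, lam],
    matching the spike counts an SNN can produce in T timesteps."""
    return lam / T * np.clip(np.floor(a * T / lam + 0.5), 0, T)

a = np.array([-0.3, 0.02, 0.5, 0.97, 1.4])
qcfs(a, T=16)  # levels: [0.0, 0.0, 0.5, 1.0, 1.0]
```

Because the ANN already sees only these T+1 levels during training, the SNN's discrete spike rates introduce almost no additional error at conversion time.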
6. Deploy to FPGA¶
After conversion, the SNN weights can be compiled to Verilog:
# Save weights, then deploy
sc-neurocore deploy model_weights.pt --target artix7 -o build/
Accuracy vs Timesteps¶
| T (timesteps) | Accuracy | Latency |
|---|---|---|
| 4 | ~85% | Very fast |
| 8 | ~90% | Fast |
| 16 | ~94% | Moderate |
| 32 | ~96% | Slow |
| 64 | ~97% | Very slow |
Higher T improves accuracy at the cost of more clock cycles per inference; pick the smallest T that meets your application's accuracy requirement.
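The table's trend follows from rate coding's resolution: with T timesteps an activation is represented by one of T+1 spike counts, so the best-case quantization error shrinks roughly as 1/(4T). A quick synthetic check of that scaling (not the benchmark behind the table):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random(10_000)  # synthetic activations in [0, 1)

errs = {}
for T in (4, 8, 16, 32, 64):
    coded = np.round(a * T) / T          # best-case spike-count representation
    errs[T] = np.abs(coded - a).mean()   # mean quantization error ~ 1/(4T)
    print(f"T={T:2d}  mean abs error = {errs[T]:.4f}")
```

Real accuracy also depends on calibration quality and layer depth, so published numbers flatten out sooner than this idealized curve.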
Further Reading¶
- Tutorial 03: Surrogate Gradient Training — direct SNN training
- Tutorial 33: Equation-to-Verilog — compile to hardware
- API: Conversion — auto-generated API docs