ANN-to-SNN Conversion¶
Convert trained PyTorch ANNs to rate-coded spiking neural networks.
Converter¶
sc_neurocore.conversion.ann_to_snn
¶
Convert trained PyTorch ANNs to rate-coded spiking neural networks.
The conversion replaces ReLU activations with IF (integrate-and-fire) neurons and uses weight/threshold normalization to preserve accuracy. Rate coding: an ANN activation a maps to a spike rate of a/theta over T timesteps.
Pipeline
- Extract weights and biases from PyTorch Sequential model
- Compute per-layer activation statistics (max, percentile)
- Normalize weights so that max activation = threshold
- Build an SNN with IF neurons that reproduces the ANN output as spike counts over T timesteps
Reference: Diehl et al. 2015 — "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing"
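The normalization step of the pipeline can be sketched as follows. This is an illustrative NumPy implementation of data-based weight normalization in the style of Diehl et al. 2015, not the library's actual code; the function name and signature are assumptions.

```python
import numpy as np

def normalize_weights(weights, activations, percentile=99.9):
    """Data-based weight normalization sketch (Diehl et al. 2015 style).

    weights     : list of per-layer weight matrices, shape (out, in)
    activations : list of per-layer ANN activations recorded on
                  calibration data (one array per layer)

    Rescales each layer so the chosen activation percentile maps onto a
    firing threshold of 1.0.
    """
    normed = []
    prev_scale = 1.0
    for W, act in zip(weights, activations):
        scale = np.percentile(act, percentile)
        # Rescale by the ratio of incoming to outgoing activation scale
        normed.append(W * prev_scale / scale)
        prev_scale = scale
    return normed
```

Using a high percentile instead of the strict maximum makes the scaling robust to activation outliers, at the cost of occasionally saturating a neuron.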
ConvertedSNN
dataclass
¶
Rate-coded SNN converted from an ANN.
Attributes¶
weights : list of ndarray
    Per-layer weight matrices.
biases : list of ndarray or None
    Per-layer biases (None if absent).
thresholds : list of float
    Per-layer firing thresholds after normalization.
T : int
    Number of simulation timesteps.
n_layers : int
    Number of layers.
Source code in src/sc_neurocore/conversion/ann_to_snn.py
run(x)
¶
Run the converted SNN for T timesteps on input x.
Parameters¶
x : ndarray of shape (n_input,) or (batch, n_input)
    Input values in [0, 1], converted to Poisson spike trains.
Returns¶
ndarray of shape (n_output,) or (batch, n_output)
    Output spike counts over T timesteps (unnormalized).
Source code in src/sc_neurocore/conversion/ann_to_snn.py
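The simulation loop behind `run` can be sketched with plain NumPy: Bernoulli-sampled input spikes drive layers of IF neurons that subtract the threshold on firing (soft reset). Function and variable names here are illustrative, not the library's API.

```python
import numpy as np

def run_snn(weights, thresholds, x, T=16, rng=None):
    """Minimal rate-coded IF simulation (sketch).

    x in [0, 1] drives Poisson-like input spikes; returns output spike
    counts of shape (batch, n_output) accumulated over T timesteps.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_2d(np.asarray(x, dtype=float))
    # One membrane-potential array per layer
    v = [np.zeros((x.shape[0], W.shape[0])) for W in weights]
    counts = np.zeros((x.shape[0], weights[-1].shape[0]))
    for _ in range(T):
        s = (rng.random(x.shape) < x).astype(float)  # Bernoulli input spikes
        for i, (W, theta) in enumerate(zip(weights, thresholds)):
            v[i] += s @ W.T                    # integrate incoming spikes
            s = (v[i] >= theta).astype(float)  # fire where threshold reached
            v[i] -= s * theta                  # soft reset: subtract threshold
        counts += s
    return counts
```

The soft (subtract) reset preserves residual membrane potential across timesteps, which is what makes the spike count approximate the ANN activation.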
classify(x)
¶
Run SNN and return predicted class indices.
Source code in src/sc_neurocore/conversion/ann_to_snn.py
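Conceptually, `classify` reduces to an argmax over the output spike counts returned by `run`, as in this sketch:

```python
import numpy as np

# Spike counts over T timesteps, shape (batch, n_output)
counts = np.array([[3, 9, 1],
                   [7, 2, 5]])
pred = counts.argmax(axis=1)  # predicted class index per sample
```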
convert(model, calibration_data=None, T=16, percentile=99.9)
¶
Convert a trained PyTorch ANN to a rate-coded SNN.
Parameters¶
model : nn.Module
    Trained PyTorch model (Sequential with Linear + ReLU).
calibration_data : Tensor, optional
    Sample input batch for threshold calibration. If None, uses a default threshold of 1.0 per layer.
T : int
    Number of simulation timesteps (higher = more accurate, slower).
percentile : float
    Activation percentile for threshold normalization.
Returns¶
ConvertedSNN
    Converted spiking network ready to run.
Source code in src/sc_neurocore/conversion/ann_to_snn.py
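The weight/bias extraction step of `convert` can be sketched as a walk over the Sequential model's Linear layers. This is an illustrative sketch, not the library's actual extraction code; the helper name is an assumption.

```python
import torch.nn as nn

def extract_params(model):
    """Collect per-layer weights and biases from a Sequential of
    Linear + ReLU modules (sketch of the extraction step)."""
    weights, biases = [], []
    for m in model:
        if isinstance(m, nn.Linear):
            weights.append(m.weight.detach().numpy().copy())
            b = m.bias.detach().numpy().copy() if m.bias is not None else None
            biases.append(b)
    return weights, biases

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
weights, biases = extract_params(model)
```

ReLU modules are skipped because, after conversion, their role is played by the IF neurons' thresholding.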
QCFS Activation¶
sc_neurocore.conversion.qcfs
¶
QCFS (Quantization-Clip-Floor-Shift) activation function.
Replaces ReLU in the ANN during conversion-aware training or post-hoc conversion. QCFS approximates the rate-coded SNN firing rate as a quantized step function, minimizing conversion error.
Reference: Bu et al. 2022 — "Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks"
QCFSActivation
¶
Bases: Module
QCFS activation: quantized clip-floor-shift ReLU replacement.
For T timesteps and threshold theta:
QCFS(x) = clip(floor(x * T / theta + 0.5), 0, T) * theta / T
This quantizes activations to T+1 levels in [0, theta], matching the achievable spike rates of an IF neuron over T timesteps.
Parameters¶
T : int
    Number of simulation timesteps.
theta : float
    Firing threshold (default 1.0).
learn_theta : bool
    Make threshold trainable (default False).
Source code in src/sc_neurocore/conversion/qcfs.py
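The QCFS formula above translates directly into a few NumPy operations; the function below is a minimal sketch of the forward pass (the real module is a `torch.nn.Module` with an optionally trainable theta):

```python
import numpy as np

def qcfs(x, T=4, theta=1.0):
    """QCFS forward pass: quantize to T+1 levels in [0, theta].

    QCFS(x) = clip(floor(x * T / theta + 0.5), 0, T) * theta / T
    """
    return np.clip(np.floor(x * T / theta + 0.5), 0, T) * theta / T
```

Values below 0 map to 0 and values above theta saturate at theta, mirroring the minimum and maximum spike rates an IF neuron can realize in T timesteps.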