Contrastive Self-Supervised Learning — InfoNCE + CSDP¶
Self-supervised learning for SNNs without labeled data, via two complementary approaches: a global InfoNCE loss for batch training and a local CSDP rule for biologically plausible on-chip learning.
SpikeContrastiveLoss — InfoNCE for Spikes¶
Adapted InfoNCE contrastive loss for spike-rate representations. Given two augmented views of the same batch, positive pairs = same input (different augmentation), negative pairs = different inputs. The loss encourages representations of the same input to be similar, and different inputs to be dissimilar.
loss = -mean(log(exp(sim(a_i, b_i)/τ) / Σ_j exp(sim(a_i, b_j)/τ)))
| Parameter | Default | Meaning |
|---|---|---|
| `temperature` | 0.5 | Contrastive temperature scaling |
Returns 0.0 for batch size < 2 (no negatives possible).
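The loss above can be sketched directly in NumPy. This is a minimal standalone implementation, independent of the library's `SpikeContrastiveLoss`; it assumes `sim(·,·)` is cosine similarity, which the formula leaves unspecified:

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.5):
    """Minimal InfoNCE sketch over L2-normalized spike-rate vectors."""
    if view_a.shape[0] < 2:
        return 0.0  # no negatives possible
    # Cosine similarity = dot product of unit-normalized rows
    a = view_a / (np.linalg.norm(view_a, axis=1, keepdims=True) + 1e-8)
    b = view_b / (np.linalg.norm(view_b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T / temperature              # (batch, batch) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))  # positives sit on the diagonal
```

Identical views score a lower loss than unrelated ones, since the diagonal (positive-pair) similarities dominate the softmax.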
CSDPRule — Contrastive Signal-Dependent Plasticity¶
Biologically plausible local learning rule. Generalizes the Forward-Forward algorithm to spiking circuits:
- Positive phase: Present real data → Hebbian update: `dW = lr * (post ⊗ pre) - decay * W`
- Negative phase: Present corrupted data → anti-Hebbian update: `dW = -lr * (post ⊗ pre)`
- Goodness: `g = Σ(activations²)` — positive data should have high goodness, negative data low
| Parameter | Default | Meaning |
|---|---|---|
| `lr` | 0.01 | Learning rate |
| `decay` | 0.001 | Weight decay |
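The two phases can be sketched as a standalone NumPy function. This is an illustration of the update equations above, not the library's `CSDPRule` API, and the function names are hypothetical:

```python
import numpy as np

def csdp_step(W, pos_pre, pos_post, neg_pre, neg_post, lr=0.01, decay=0.001):
    """One CSDP step: Hebbian on real data, anti-Hebbian on corrupted data."""
    # Positive phase: dW = lr * (post ⊗ pre) - decay * W
    W = W + lr * np.outer(pos_post, pos_pre) - decay * W
    # Negative phase: dW = -lr * (post ⊗ pre)
    W = W - lr * np.outer(neg_post, neg_pre)
    return W

def goodness(activations):
    """Sum of squared activations; high for real data, low for corrupted."""
    return float(np.sum(np.asarray(activations) ** 2))
```

Note the update is purely local: each weight `W[i, j]` changes only as a function of its own pre- and postsynaptic activity, which is what makes the rule suitable for on-chip learning.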
Usage¶
from sc_neurocore.contrastive import SpikeContrastiveLoss, CSDPRule
import numpy as np

# InfoNCE training
loss_fn = SpikeContrastiveLoss(temperature=0.5)
view_a = np.random.randn(32, 128)  # batch of 32, 128 features
view_b = np.random.randn(32, 128)  # augmented version of the same batch
loss = loss_fn.compute(view_a, view_b)

# CSDP local learning (random arrays stand in for recorded spikes/activations)
real_spikes = np.random.rand(32)
real_activations = np.random.rand(64)
noise_spikes = np.random.rand(32)
noise_activations = np.random.rand(64)

csdp = CSDPRule(lr=0.01)
W = np.random.randn(64, 32) * 0.1  # (post, pre) weight matrix
W = csdp.contrastive_step(
    W,
    pos_pre=real_spikes, pos_post=real_activations,
    neg_pre=noise_spikes, neg_post=noise_activations,
)
Reference: Ororbia 2024, Science Advances.
See Tutorial 80: Contrastive SSL.
sc_neurocore.contrastive.ssl
¶
Contrastive self-supervised learning for SNNs.
- SpikeContrastiveLoss: InfoNCE-style loss for spike representations.
- CSDPRule: Contrastive Signal-Dependent Plasticity — biologically plausible local learning rule (Science Advances 2024).
No SNN library ships self-supervised learning utilities.
SpikeContrastiveLoss
¶
InfoNCE contrastive loss adapted for spike representations.
Computes similarity between spike-rate vectors from two augmented views of the same input. Positive pairs = same input, different augmentation. Negative pairs = different inputs.
Parameters¶
temperature : float
    Contrastive temperature scaling.
Source code in src/sc_neurocore/contrastive/ssl.py
compute(view_a, view_b)
¶
Compute contrastive loss for a batch of spike-rate pairs.
Parameters¶
view_a : ndarray of shape (batch, n_features)
    Spike rates from augmentation A.
view_b : ndarray of shape (batch, n_features)
    Spike rates from augmentation B.
Returns¶
float — InfoNCE loss (0.0 when batch size < 2, since no negative pairs exist)
Source code in src/sc_neurocore/contrastive/ssl.py
CSDPRule
dataclass
¶
Contrastive Signal-Dependent Plasticity.
Local learning rule: weight update depends on (pre, post, contrastive_signal). Positive phase: present real data → Hebbian update. Negative phase: present corrupted data → anti-Hebbian update.
Generalizes Forward-Forward to spiking circuits.
Reference: Ororbia 2024, Science Advances
Parameters¶
lr : float
    Learning rate.
decay : float
    Weight decay for regularization.
Source code in src/sc_neurocore/contrastive/ssl.py
positive_update(weights, pre_spikes, post_spikes)
¶
Hebbian update from positive (real) data.
dW = lr * (post @ pre^T) - decay * W
Source code in src/sc_neurocore/contrastive/ssl.py
negative_update(weights, pre_spikes, post_spikes)
¶
Anti-Hebbian update from negative (corrupted) data.
dW = -lr * (post @ pre^T)
Source code in src/sc_neurocore/contrastive/ssl.py
contrastive_step(weights, pos_pre, pos_post, neg_pre, neg_post)
¶
Full contrastive update: positive + negative phase.
Source code in src/sc_neurocore/contrastive/ssl.py
goodness(activations)
¶
Compute 'goodness' score (sum of squared activations).
Positive data should have high goodness, negative data low.
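In Forward-Forward-style training, the goodness score itself can act as a classifier: after learning, real inputs score above a threshold and corrupted inputs below it. A minimal sketch with an illustrative threshold value:

```python
import numpy as np

def goodness(activations):
    """Sum of squared activations (as defined above)."""
    return float(np.sum(np.asarray(activations) ** 2))

# After CSDP training, real data drives stronger activations than noise,
# so a fixed threshold separates the two classes of input.
real_acts = np.array([0.9, 0.8, 0.7])   # illustrative post-training activations
noise_acts = np.array([0.1, 0.0, 0.2])
theta = 1.0                              # illustrative decision threshold
is_real = goodness(real_acts) > theta    # True
is_noise = goodness(noise_acts) > theta  # False
```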
Source code in src/sc_neurocore/contrastive/ssl.py