Continual Learning — EWC + On-Chip Plasticity¶
Train with backprop, deploy with local plasticity. Combines Elastic Weight Consolidation (EWC) for catastrophic forgetting protection with STDP-based local learning rules that can run on-chip.
Pipeline¶
- Train task A with standard backprop
- Compute Fisher diagonal from per-sample gradients → identifies important parameters
- Train task B with EWC penalty: L_ewc = (λ/2) * Σ F_i * (θ_i - θ*_i)²
- Extract plasticity configs for on-chip deployment (STDP parameters derived from weight statistics)
- Deploy with active on-chip plasticity rules
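The EWC penalty in the pipeline above can be sketched directly in NumPy. This is an illustrative stand-alone computation, not the library's implementation; `theta`, `theta_star`, and `fisher` are hypothetical stand-ins for the current weights, the task-A anchor weights, and the stored Fisher diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for one layer: current weights, the anchor
# weights theta* saved after task A, and the Fisher diagonal F.
theta = rng.normal(size=(64, 32))
theta_star = theta + 0.01 * rng.normal(size=(64, 32))
fisher = rng.random((64, 32))  # per-parameter importance, all >= 0
ewc_lambda = 1000.0

# L_ewc = (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2
penalty = 0.5 * ewc_lambda * np.sum(fisher * (theta - theta_star) ** 2)
print(f"EWC penalty: {penalty:.4f}")
```

Parameters where `fisher` is large are pulled strongly toward their task-A values; parameters with near-zero Fisher entries remain free to change for task B.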
No framework provides the integrated pipeline from "trained model" to "deployed model with active on-chip plasticity."
Components¶
ContinualLearner — Main engine managing EWC and plasticity extraction.
| Parameter | Default | Meaning |
|---|---|---|
| weights | (required) | List of weight matrices per layer |
| layer_names | auto | Names for each layer |
| ewc_lambda | 1000.0 | EWC regularization strength |
| plasticity_rule | "stdp" | Default on-chip plasticity rule |
Key methods:
- compute_fisher(gradients_per_sample) — Compute Fisher Information diagonal from per-sample gradients
- ewc_penalty() → float — Current EWC regularization penalty
- register_task(accuracy) — Register task completion
- update_weights(new_weights) — Update weights after training
- extract_plasticity_configs() → list of PlasticityConfig — Derive on-chip deployment parameters
- report() → ContinualReport — Full report with accuracy history

PlasticityConfig — Per-layer on-chip plasticity configuration (rule, tau_pre/tau_post, learning rates, weight bounds, homeostatic target).
ContinualReport — Report dataclass with a summary() method.
Usage¶
from sc_neurocore.continual import ContinualLearner
import numpy as np
weights = [np.random.randn(64, 32) * 0.3, np.random.randn(10, 64) * 0.3]
learner = ContinualLearner(weights, layer_names=["hidden", "output"])
# After training task A: compute Fisher
gradients = [[np.random.randn(64, 32), np.random.randn(10, 64)] for _ in range(100)]
learner.compute_fisher(gradients)
learner.register_task(accuracy=0.95)
# Training task B: EWC penalty prevents forgetting
print(f"EWC penalty: {learner.ewc_penalty():.4f}")
# Deploy with on-chip plasticity
configs = learner.extract_plasticity_configs()
for c in configs:
    print(f"{c.layer_name}: rule={c.rule}, lr+={c.lr_potentiation:.4f}")
Reference: Kirkpatrick et al. 2017 — "Overcoming catastrophic forgetting in neural networks" (EWC).
See Tutorial 58: Continual Learning.
sc_neurocore.continual¶
Continual learning: train → deploy → adapt without catastrophic forgetting.
ContinualLearner¶
Continual learning engine with EWC and on-chip plasticity extraction.
Parameters¶
weights : list of ndarray
    Initial trained weight matrices per layer.
layer_names : list of str
    Names for each layer.
ewc_lambda : float
    Regularization strength for EWC (0 = no protection).
plasticity_rule : str
    Default on-chip plasticity rule for all layers.
Source code in src/sc_neurocore/continual/engine.py
compute_fisher(gradients_per_sample)¶
Compute Fisher Information diagonal from per-sample gradients.
Parameters¶
gradients_per_sample : list of (list of ndarray)
    Outer list: samples. Inner list: gradient per layer. Each ndarray has the same shape as the corresponding weight matrix.
Source code in src/sc_neurocore/continual/engine.py
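For intuition, the Fisher diagonal is commonly estimated as the per-parameter mean of squared gradients over samples. The helper below is a minimal sketch under that assumption (not the library's source); `fisher_diagonal` is a hypothetical name:

```python
import numpy as np

def fisher_diagonal(gradients_per_sample):
    """Per-layer mean of squared per-sample gradients (illustrative).

    gradients_per_sample: outer list over samples, inner list over layers,
    each entry shaped like the corresponding weight matrix.
    """
    n_layers = len(gradients_per_sample[0])
    return [
        np.mean([np.square(g[layer]) for g in gradients_per_sample], axis=0)
        for layer in range(n_layers)
    ]

# Three samples, one layer, with gradients of 1, 2, 3 everywhere:
grads = [[np.ones((2, 2)) * (s + 1)] for s in range(3)]
F = fisher_diagonal(grads)
print(F[0])  # each entry is (1 + 4 + 9) / 3
```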
ewc_penalty()¶
Compute EWC regularization penalty.
Source code in src/sc_neurocore/continual/engine.py
register_task(accuracy)¶
Register completion of a task.
Source code in src/sc_neurocore/continual/engine.py
update_weights(new_weights)¶
Update weights (e.g., after training on a new task).
Source code in src/sc_neurocore/continual/engine.py
extract_plasticity_configs()¶
Extract per-layer plasticity parameters for on-chip deployment.
Derives STDP parameters from weight statistics:
- LR proportional to weight variance (active synapses learn faster)
- Bounds from weight range
- Homeostatic target from mean firing rate proxy
Source code in src/sc_neurocore/continual/engine.py
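A minimal sketch of what "STDP parameters from weight statistics" can mean in practice. The function name, scaling factors, and the specific variance-to-learning-rate mapping below are assumptions for illustration, not the library's exact derivation:

```python
import numpy as np

def derive_stdp_params(w, base_lr=0.01):
    """Illustrative derivation of per-layer STDP parameters.

    Assumed heuristics: learning rate scales with weight variance
    (more spread = more active learning), and weight bounds come
    from the observed weight range.
    """
    var = float(np.var(w))
    return {
        "lr_potentiation": base_lr * var,        # A+ grows with variance
        "lr_depression": 0.5 * base_lr * var,    # A- assumed half of A+
        "w_min": float(w.min()),
        "w_max": float(w.max()),
    }

w = np.random.randn(64, 32) * 0.3
params = derive_stdp_params(w)
print(params)
```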
report()¶
Generate a continual learning report.
Source code in src/sc_neurocore/continual/engine.py
PlasticityConfig dataclass¶
Per-layer on-chip plasticity configuration.
Extracted from training for hardware deployment.
Parameters¶
layer_name : str
rule : str
    Plasticity rule: 'stdp', 'r_stdp', 'homeostatic', 'none'.
tau_pre : float
    Pre-synaptic trace time constant (ms).
tau_post : float
    Post-synaptic trace time constant (ms).
lr_potentiation : float
    Potentiation learning rate (A+).
lr_depression : float
    Depression learning rate (A-).
w_min : float
    Minimum weight.
w_max : float
    Maximum weight.
homeostatic_target : float
    Target firing rate for homeostatic regulation.
Source code in src/sc_neurocore/continual/engine.py
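The documented fields map onto a dataclass roughly like the sketch below. The default values shown are illustrative assumptions only; the library's actual defaults may differ:

```python
from dataclasses import dataclass

@dataclass
class PlasticityConfig:
    """Sketch of the per-layer on-chip plasticity config (defaults assumed)."""
    layer_name: str
    rule: str = "stdp"            # 'stdp', 'r_stdp', 'homeostatic', 'none'
    tau_pre: float = 20.0         # pre-synaptic trace time constant (ms)
    tau_post: float = 20.0        # post-synaptic trace time constant (ms)
    lr_potentiation: float = 0.01 # A+
    lr_depression: float = 0.005  # A-
    w_min: float = -1.0
    w_max: float = 1.0
    homeostatic_target: float = 0.1

cfg = PlasticityConfig(layer_name="hidden")
print(cfg)
```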
ContinualReport dataclass¶
Report from a continual learning session.
Source code in src/sc_neurocore/continual/engine.py