# Differential Privacy — Spike-Level DP

Spike-level differential privacy adds privacy noise in the spike domain instead of the gradient domain, exploiting the binary nature of spikes for more natural DP mechanisms.
## Why Spike-Level DP?

Standard DP-SGD adds Gaussian noise to gradients (continuous, high-dimensional). For SNNs, spikes are already binary — we can use mechanisms designed for binary data:
| Mechanism | How It Works | Privacy Cost |
|---|---|---|
| Randomized Response | Flip each bit with probability p = 1/(1+e^ε) | ε per bit |
| Poisson Subsampling | Keep each spike with probability q = e^ε/(1+e^ε) | ε per step |
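The two mechanisms in the table can be sketched in a few lines of NumPy. This is a stand-alone illustration, not the library's implementation — `SpikeLevelDP.privatize` wraps the same idea, but its internals may differ:

```python
import numpy as np

def randomized_response(spikes, epsilon, rng):
    """Flip each binary spike with probability p = 1 / (1 + e^epsilon)."""
    p_flip = 1.0 / (1.0 + np.exp(epsilon))
    flips = rng.random(spikes.shape) < p_flip
    return np.where(flips, 1 - spikes, spikes).astype(spikes.dtype)

def poisson_subsample(spikes, epsilon, rng):
    """Keep each spike with probability q = e^epsilon / (1 + e^epsilon)."""
    q_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(spikes.shape) < q_keep
    return np.where(keep, spikes, 0).astype(spikes.dtype)

rng = np.random.default_rng(0)
spikes = np.random.default_rng(1).integers(0, 2, (100, 64)).astype(np.int8)
rr = randomized_response(spikes, epsilon=1.0, rng=rng)    # ~27% of bits flipped at eps=1
sub = poisson_subsample(spikes, epsilon=1.0, rng=rng)     # spikes only dropped, never added
```

Note the asymmetry: randomized response can both add and remove spikes, while subsampling only removes them (so it preserves spike sparsity).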
## Components

`SpikeLevelDP` — Main DP mechanism.

| Parameter | Default | Meaning |
|---|---|---|
| `epsilon` | `1.0` | Per-step privacy budget |
| `mechanism` | `"randomized_response"` | DP mechanism |

Methods: `privatize(spikes)` — apply DP noise to a spike tensor.
`PrivacyAccountant` — Track cumulative privacy budget.

| Parameter | Default | Meaning |
|---|---|---|
| `target_epsilon` | `1.0` | Total privacy budget |
| `target_delta` | `1e-5` | Failure probability |

Properties: `spent_epsilon`, `remaining_epsilon`, `budget_exhausted`. Methods: `record_step(step_epsilon)`, `summary()`.

`MembershipAudit` — Audit an SNN for membership inference vulnerability. Compares model confidence on training vs. non-training samples. Returns an `accuracy` score (0.5 = no leakage, 1.0 = full leak) and a `vulnerable` flag set when accuracy > 0.6.
## Usage

```python
from sc_neurocore.privacy.dp_snn import SpikeLevelDP, PrivacyAccountant, MembershipAudit
import numpy as np

# Apply DP to spike outputs
dp = SpikeLevelDP(epsilon=1.0, mechanism="randomized_response")
spikes = np.random.randint(0, 2, (100, 64)).astype(np.int8)
private_spikes = dp.privatize(spikes)

# Track privacy budget
accountant = PrivacyAccountant(target_epsilon=10.0)
for step in range(100):
    accountant.record_step(dp.per_step_epsilon)
    if accountant.budget_exhausted:
        print(f"Budget exhausted at step {step}")
        break
print(accountant.summary())

# Membership inference audit
def model_fn(x):
    return np.random.randn(10)  # your model here

auditor = MembershipAudit(run_fn=model_fn)
member_samples = [spikes]  # placeholder: samples drawn from the training set
non_member_samples = [np.random.randint(0, 2, (100, 64)).astype(np.int8)]  # placeholder
result = auditor.audit(member_samples, non_member_samples)
print(f"MI accuracy: {result['accuracy']:.2f}, vulnerable: {result['vulnerable']}")
```
See Tutorial 62: Differential Privacy.
## sc_neurocore.privacy

Spike-level differential privacy: training and inference with privacy guarantees.
### SpikeLevelDP

Spike-level differential privacy mechanism.

Adds stochastic spike noise to provide (epsilon, delta)-DP. Two mechanisms:

- Spike randomized response: each spike is independently flipped with probability p
- Spike subsampling: spikes are randomly dropped with probability 1 - q
**Parameters**

- `epsilon` (float): Per-step privacy budget.
- `mechanism` (str): `'randomized_response'` or `'subsampling'`.
- `seed` (int): Random seed.
Source code in src/sc_neurocore/privacy/dp_snn.py
#### privatize(spikes)

Apply the DP mechanism to a spike tensor.
**Parameters**

- `spikes` (ndarray of shape (T, N) or (N,)): Binary spike tensor.
**Returns**

- ndarray, same shape: Privatized spikes.
Source code in src/sc_neurocore/privacy/dp_snn.py
### PrivacyAccountant (dataclass)

Track cumulative privacy budget across training steps.
Uses the simple composition theorem: total epsilon is the sum of per-step epsilons. For tighter bounds, use Rényi DP (future extension).
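Simple composition is easy to sketch outside the library — each DP step spends part of the budget, and training must stop once the target is reached (a hypothetical stand-alone illustration, not the accountant's actual code):

```python
# Simple (sequential) composition: per-step privacy costs add up.
target_epsilon = 10.0     # total privacy budget
per_step_epsilon = 0.5    # cost of one DP training step

spent = 0.0
steps = 0
while spent + per_step_epsilon <= target_epsilon:
    spent += per_step_epsilon  # record one training step
    steps += 1

# 20 steps of eps=0.5 exhaust a total budget of eps=10.
```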
**Parameters**
- `target_epsilon` (float): Privacy budget limit.
- `target_delta` (float): Failure probability.
Source code in src/sc_neurocore/privacy/dp_snn.py
#### record_step(step_epsilon)

Record the privacy cost of one training step.
Source code in src/sc_neurocore/privacy/dp_snn.py
### MembershipAudit

Audit an SNN for membership inference vulnerability.
Given a trained model (as a callable), test whether it leaks information about training-data membership. Uses shadow-model methodology: compares model confidence on training vs. non-training samples.
**Parameters**

- `run_fn` (callable): Model function mapping spikes of shape (T, N) to an output of shape (N_out,).
Source code in src/sc_neurocore/privacy/dp_snn.py
#### audit(member_samples, non_member_samples)

Run the membership inference audit.
**Parameters**

- `member_samples` (list of ndarray): Samples known to be in the training set.
- `non_member_samples` (list of ndarray): Samples known to NOT be in the training set.
**Returns**

dict with:

- `accuracy`: membership inference accuracy (0.5 = no leakage, 1.0 = full leak)
- `member_confidence`: mean output magnitude for members
- `non_member_confidence`: mean output magnitude for non-members
- `vulnerable`: bool, True if accuracy > 0.6
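The accuracy score can be understood via a simple threshold test: if a confidence cutoff separates members from non-members, membership is inferable. The snippet below is a hypothetical scoring rule for illustration (the library's exact rule inside `audit` may differ):

```python
import numpy as np

def mi_accuracy(member_conf, non_member_conf):
    """Score membership inference by thresholding confidence at the midpoint
    of the two group means (hypothetical scoring rule, for illustration)."""
    member_conf = np.asarray(member_conf, dtype=float)
    non_member_conf = np.asarray(non_member_conf, dtype=float)
    thresh = (member_conf.mean() + non_member_conf.mean()) / 2.0
    # Count members above the threshold and non-members at or below it.
    correct = (member_conf > thresh).sum() + (non_member_conf <= thresh).sum()
    return correct / (member_conf.size + non_member_conf.size)

# Perfectly separated confidences -> full leakage.
acc = mi_accuracy([0.9, 0.8, 0.85], [0.4, 0.5, 0.45])  # 1.0
vulnerable = acc > 0.6
```

Overlapping confidence distributions drive the score toward 0.5, i.e. the attacker does no better than coin-flipping.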
Source code in src/sc_neurocore/privacy/dp_snn.py