
Differential Privacy — Spike-Level DP

Spike-level differential privacy: add privacy noise in the spike domain rather than the gradient domain, exploiting the binary nature of spikes for DP mechanisms that are natural for binary data.

Why Spike-Level DP?

Standard DP-SGD adds Gaussian noise to gradients (continuous, high-dimensional). For SNNs, spikes are already binary — we can use mechanisms designed for binary data:

| Mechanism | How It Works | Privacy Cost |
|---|---|---|
| Randomized Response | Flip each bit with probability p = 1/(1+e^ε) | ε per bit |
| Poisson Subsampling | Keep each spike with probability q = e^ε/(1+e^ε) | ε per step |
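
To build intuition for the table above, the relationship between epsilon and the noise parameters can be computed directly. The helper names below are illustrative, not part of the library:

```python
import numpy as np

# Hypothetical helpers (not part of sc_neurocore): the noise parameters
# implied by a per-step epsilon for each mechanism.
def rr_flip_prob(epsilon: float) -> float:
    """Randomized response: probability of flipping each bit."""
    return 1.0 / (1.0 + np.exp(epsilon))

def subsample_keep_prob(epsilon: float) -> float:
    """Poisson subsampling: probability of keeping each spike."""
    return np.exp(epsilon) / (1.0 + np.exp(epsilon))

for eps in (0.1, 1.0, 5.0):
    p, q = rr_flip_prob(eps), subsample_keep_prob(eps)
    # For the same epsilon the two parameters are complementary: p + q = 1.
    print(f"epsilon={eps}: flip_prob={p:.3f}, keep_prob={q:.3f}")
```

Larger epsilon means less noise: at ε = 5 the flip probability is already below 1%.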

Components

  • SpikeLevelDP — Main DP mechanism.
| Parameter | Default | Meaning |
|---|---|---|
| epsilon | 1.0 | Per-step privacy budget |
| mechanism | "randomized_response" | DP mechanism: "randomized_response" or "subsampling" |

Methods: privatize(spikes) — apply DP noise to a spike tensor.

  • PrivacyAccountant — Track cumulative privacy budget.
| Parameter | Default | Meaning |
|---|---|---|
| target_epsilon | 1.0 | Total privacy budget |
| target_delta | 1e-5 | Failure probability |

Properties: spent_epsilon, remaining_epsilon, budget_exhausted. Methods: record_step(step_epsilon), summary().

  • MembershipAudit — Audit an SNN for membership inference vulnerability. Compares model confidence on training vs non-training samples. Returns accuracy (0.5 = no leakage, 1.0 = full leak) and a vulnerable flag that is set when accuracy > 0.6.

Usage

from sc_neurocore.privacy.dp_snn import SpikeLevelDP, PrivacyAccountant, MembershipAudit
import numpy as np

# Apply DP to spike outputs
dp = SpikeLevelDP(epsilon=1.0, mechanism="randomized_response")
spikes = np.random.randint(0, 2, (100, 64)).astype(np.int8)
private_spikes = dp.privatize(spikes)

# Track privacy budget
accountant = PrivacyAccountant(target_epsilon=10.0)
for step in range(100):
    accountant.record_step(dp.per_step_epsilon)
    if accountant.budget_exhausted:
        print(f"Budget exhausted at step {step}")
        break
print(accountant.summary())

# Membership inference audit
def model_fn(x):
    return np.random.randn(10)  # placeholder: substitute your trained model

# Example inputs: lists of binary spike tensors of shape (T, N)
member_samples = [np.random.randint(0, 2, (100, 64)) for _ in range(20)]
non_member_samples = [np.random.randint(0, 2, (100, 64)) for _ in range(20)]

auditor = MembershipAudit(run_fn=model_fn)
result = auditor.audit(member_samples, non_member_samples)
print(f"MI accuracy: {result['accuracy']:.2f}, vulnerable: {result['vulnerable']}")

See Tutorial 62: Differential Privacy.

sc_neurocore.privacy

Spike-level differential privacy: training and inference with privacy guarantees.

SpikeLevelDP

Spike-level differential privacy mechanism.

Adds stochastic spike noise to provide (epsilon, delta)-DP. Two mechanisms:

- Spike randomized response: each spike independently flipped with probability p
- Spike subsampling: randomly drop spikes with probability 1 - q

Parameters

epsilon : float
    Per-step privacy budget.
mechanism : str
    'randomized_response' or 'subsampling'.
seed : int

Source code in src/sc_neurocore/privacy/dp_snn.py
class SpikeLevelDP:
    """Spike-level differential privacy mechanism.

    Adds stochastic spike noise to provide (epsilon, delta)-DP.
    Two mechanisms:
    - Spike randomized response: each spike independently flipped with probability p
    - Spike subsampling: randomly drop spikes with probability 1-q

    Parameters
    ----------
    epsilon : float
        Per-step privacy budget.
    mechanism : str
        'randomized_response' or 'subsampling'.
    seed : int
    """

    def __init__(
        self, epsilon: float = 1.0, mechanism: str = "randomized_response", seed: int = 42
    ):
        self.epsilon = epsilon
        self.mechanism = mechanism
        self._rng = np.random.RandomState(seed)

        # Compute noise parameter from epsilon
        if mechanism == "randomized_response":
            # Randomized response: flip each bit with probability p = 1/(1+e^epsilon)
            self.flip_prob = 1.0 / (1.0 + np.exp(epsilon))
        elif mechanism == "subsampling":
            # Poisson subsampling: keep each spike with probability q = e^epsilon / (1+e^epsilon)
            self.keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
        else:
            raise ValueError(f"Unknown mechanism '{mechanism}'")

    def privatize(self, spikes: np.ndarray) -> np.ndarray:
        """Apply DP mechanism to a spike tensor.

        Parameters
        ----------
        spikes : ndarray of shape (T, N) or (N,)
            Binary spike tensor.

        Returns
        -------
        ndarray, same shape
            Privatized spikes.
        """
        if self.mechanism == "randomized_response":
            flip_mask = self._rng.random(spikes.shape) < self.flip_prob
            privatized = spikes.copy().astype(np.int8)
            privatized[flip_mask] = 1 - privatized[flip_mask]
            return privatized
        else:
            keep_mask = self._rng.random(spikes.shape) < self.keep_prob
            return (spikes * keep_mask).astype(spikes.dtype)

    @property
    def per_step_epsilon(self) -> float:
        return self.epsilon
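
The randomized-response branch above can be checked empirically: over many bits, the fraction of spikes that actually flip should converge to flip_prob. A standalone sketch that mirrors the class's logic (not an import of the library):

```python
import numpy as np

# Mirror SpikeLevelDP's randomized-response branch and verify the
# observed flip rate matches the analytical flip probability.
epsilon = 1.0
flip_prob = 1.0 / (1.0 + np.exp(epsilon))  # ~0.269 for epsilon = 1

rng = np.random.RandomState(0)
spikes = rng.randint(0, 2, (1000, 64)).astype(np.int8)

# Flip each bit independently with probability flip_prob.
flip_mask = rng.random(spikes.shape) < flip_prob
private = spikes.copy()
private[flip_mask] = 1 - private[flip_mask]

# Fraction of bits that changed should be close to flip_prob.
observed = float((private != spikes).mean())
print(f"expected flip rate {flip_prob:.3f}, observed {observed:.3f}")
```

With 64,000 bits the sampling error is on the order of 0.002, so the observed rate lands very close to the analytical value.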

privatize(spikes)

Apply DP mechanism to a spike tensor.

Parameters

spikes : ndarray of shape (T, N) or (N,)
    Binary spike tensor.

Returns

ndarray, same shape
    Privatized spikes.

Source code in src/sc_neurocore/privacy/dp_snn.py
def privatize(self, spikes: np.ndarray) -> np.ndarray:
    """Apply DP mechanism to a spike tensor.

    Parameters
    ----------
    spikes : ndarray of shape (T, N) or (N,)
        Binary spike tensor.

    Returns
    -------
    ndarray, same shape
        Privatized spikes.
    """
    if self.mechanism == "randomized_response":
        flip_mask = self._rng.random(spikes.shape) < self.flip_prob
        privatized = spikes.copy().astype(np.int8)
        privatized[flip_mask] = 1 - privatized[flip_mask]
        return privatized
    else:
        keep_mask = self._rng.random(spikes.shape) < self.keep_prob
        return (spikes * keep_mask).astype(spikes.dtype)

PrivacyAccountant dataclass

Track cumulative privacy budget across training steps.

Uses simple composition theorem: total epsilon = sum of per-step epsilons. For tighter bounds, use Renyi DP (future extension).

Parameters

target_epsilon : float
    Privacy budget limit.
target_delta : float
    Failure probability.

Source code in src/sc_neurocore/privacy/dp_snn.py
@dataclass
class PrivacyAccountant:
    """Track cumulative privacy budget across training steps.

    Uses simple composition theorem: total epsilon = sum of per-step epsilons.
    For tighter bounds, use Renyi DP (future extension).

    Parameters
    ----------
    target_epsilon : float
        Privacy budget limit.
    target_delta : float
        Failure probability.
    """

    target_epsilon: float = 1.0
    target_delta: float = 1e-5
    _spent_epsilon: float = 0.0
    _steps: int = 0

    def record_step(self, step_epsilon: float):
        """Record privacy cost of one training step."""
        self._spent_epsilon += step_epsilon
        self._steps += 1

    @property
    def spent_epsilon(self) -> float:
        return self._spent_epsilon

    @property
    def remaining_epsilon(self) -> float:
        return max(0.0, self.target_epsilon - self._spent_epsilon)

    @property
    def budget_exhausted(self) -> bool:
        return self._spent_epsilon >= self.target_epsilon

    def summary(self) -> str:
        return (
            f"Privacy: epsilon={self._spent_epsilon:.4f}/{self.target_epsilon} "
            f"({self._steps} steps), delta={self.target_delta}"
        )
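
Under simple composition, the budget is exhausted after target_epsilon / step_epsilon recorded steps. A quick standalone arithmetic check of the accountant's stopping logic (note: a per-step epsilon that is not exactly representable in binary, such as 0.1, can shift the stopping step by one due to floating-point summation error, so this sketch uses 0.25):

```python
# Standalone sketch of the accountant's simple-composition arithmetic.
target_epsilon = 10.0
step_epsilon = 0.25  # exactly representable in binary, so the sum is exact

spent, steps = 0.0, 0
while spent < target_epsilon:  # same condition as budget_exhausted
    spent += step_epsilon
    steps += 1

print(steps)  # → 40: exhausted after target_epsilon / step_epsilon steps
```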

record_step(step_epsilon)

Record privacy cost of one training step.

Source code in src/sc_neurocore/privacy/dp_snn.py
def record_step(self, step_epsilon: float):
    """Record privacy cost of one training step."""
    self._spent_epsilon += step_epsilon
    self._steps += 1

MembershipAudit

Audit SNN for membership inference vulnerability.

Given a trained model (as a callable), test whether it leaks information about training data membership. Uses shadow model methodology: compare model confidence on training vs non-training samples.

Parameters

run_fn : callable
    Model function: takes spikes (T, N) → output (N_out,).

Source code in src/sc_neurocore/privacy/dp_snn.py
class MembershipAudit:
    """Audit SNN for membership inference vulnerability.

    Given a trained model (as a callable), test whether it leaks
    information about training data membership. Uses shadow model
    methodology: compare model confidence on training vs non-training
    samples.

    Parameters
    ----------
    run_fn : callable
        Model function: takes spikes (T, N) → output (N_out,).
    """

    def __init__(self, run_fn):
        self.run_fn = run_fn

    def audit(
        self,
        member_samples: list[np.ndarray],
        non_member_samples: list[np.ndarray],
    ) -> dict:
        """Run membership inference audit.

        Parameters
        ----------
        member_samples : list of ndarray
            Samples known to be in the training set.
        non_member_samples : list of ndarray
            Samples known to NOT be in the training set.

        Returns
        -------
        dict with:
            - accuracy: membership inference accuracy (0.5 = no leakage, 1.0 = full leak)
            - member_confidence: mean output magnitude for members
            - non_member_confidence: mean output magnitude for non-members
            - vulnerable: bool, True if accuracy > 0.6
        """
        member_scores = [float(np.abs(self.run_fn(s)).mean()) for s in member_samples]
        non_member_scores = [float(np.abs(self.run_fn(s)).mean()) for s in non_member_samples]

        mean_member = float(np.mean(member_scores))
        mean_non = float(np.mean(non_member_scores))

        # Threshold-based inference: predict member if score > midpoint
        threshold = (mean_member + mean_non) / 2
        correct = 0
        total = len(member_scores) + len(non_member_scores)

        for s in member_scores:
            if s >= threshold:
                correct += 1
        for s in non_member_scores:
            if s < threshold:
                correct += 1

        accuracy = correct / max(total, 1)

        return {
            "accuracy": accuracy,
            "member_confidence": mean_member,
            "non_member_confidence": mean_non,
            "vulnerable": accuracy > 0.6,
        }
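
The midpoint-threshold attack above degenerates to chance (accuracy near 0.5) when member and non-member confidences are drawn from the same distribution, and flags leakage when members score systematically higher. A standalone sketch of the same logic on synthetic confidence scores (illustrative numbers, not library output):

```python
import numpy as np

def threshold_attack_accuracy(member_scores, non_member_scores):
    """Midpoint-threshold membership inference, mirroring MembershipAudit.audit."""
    threshold = (np.mean(member_scores) + np.mean(non_member_scores)) / 2
    correct = int((member_scores >= threshold).sum())
    correct += int((non_member_scores < threshold).sum())
    return correct / (len(member_scores) + len(non_member_scores))

rng = np.random.RandomState(0)

# Leaky model: members score systematically higher -> accuracy well above 0.5.
leaky = threshold_attack_accuracy(
    rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500)
)

# Private model: both groups score alike -> accuracy near 0.5 (chance level).
private = threshold_attack_accuracy(
    rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.0, 500)
)

print(f"leaky: {leaky:.2f}, private: {private:.2f}")
```

With the synthetic score gap above, the leaky case lands around 0.84 (it would trip the vulnerable flag), while the private case stays inside the noise around 0.5.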

audit(member_samples, non_member_samples)

Run membership inference audit.

Parameters

member_samples : list of ndarray
    Samples known to be in the training set.
non_member_samples : list of ndarray
    Samples known to NOT be in the training set.

Returns

dict with:
- accuracy: membership inference accuracy (0.5 = no leakage, 1.0 = full leak)
- member_confidence: mean output magnitude for members
- non_member_confidence: mean output magnitude for non-members
- vulnerable: bool, True if accuracy > 0.6

Source code in src/sc_neurocore/privacy/dp_snn.py
def audit(
    self,
    member_samples: list[np.ndarray],
    non_member_samples: list[np.ndarray],
) -> dict:
    """Run membership inference audit.

    Parameters
    ----------
    member_samples : list of ndarray
        Samples known to be in the training set.
    non_member_samples : list of ndarray
        Samples known to NOT be in the training set.

    Returns
    -------
    dict with:
        - accuracy: membership inference accuracy (0.5 = no leakage, 1.0 = full leak)
        - member_confidence: mean output magnitude for members
        - non_member_confidence: mean output magnitude for non-members
        - vulnerable: bool, True if accuracy > 0.6
    """
    member_scores = [float(np.abs(self.run_fn(s)).mean()) for s in member_samples]
    non_member_scores = [float(np.abs(self.run_fn(s)).mean()) for s in non_member_samples]

    mean_member = float(np.mean(member_scores))
    mean_non = float(np.mean(non_member_scores))

    # Threshold-based inference: predict member if score > midpoint
    threshold = (mean_member + mean_non) / 2
    correct = 0
    total = len(member_scores) + len(non_member_scores)

    for s in member_scores:
        if s >= threshold:
            correct += 1
    for s in non_member_scores:
        if s < threshold:
            correct += 1

    accuracy = correct / max(total, 1)

    return {
        "accuracy": accuracy,
        "member_confidence": mean_member,
        "non_member_confidence": mean_non,
        "vulnerable": accuracy > 0.6,
    }