Neurons

Python-facing neuron models spanning classical integrate-and-fire dynamics, conductance-based cells, neural-mass models, maps, hardware-specific neurons, and differentiable training cells. Use the source tree and benchmark inventory as the authority for exact model and backend counts.

Quick Start

Python
# Flat import (any model)
from sc_neurocore.neurons import HodgkinHuxleyNeuron, AdExNeuron

# Individual file import
from sc_neurocore.neurons.models.hodgkin_huxley import HodgkinHuxleyNeuron

Core SC Neurons (bitstream-capable)

| Class | Domain |
| --- | --- |
| StochasticLIFNeuron | Software simulation (fast) |
| FixedPointLIFNeuron | Bit-true Q8.8 hardware model |
| HomeostaticLIFNeuron | Self-regulating firing rate |
| SCIzhikevichNeuron | Rich dynamics (bursting, chattering) |
| StochasticDendriticNeuron | XOR dendritic processing |

sc_neurocore.neurons.base.BaseNeuron

Bases: ABC

Abstract base class for stochastic neuron models.

All neurons should expose:

- step(input_current) -> spike (0 or 1)
- reset_state()
- get_state() -> dict

Source code in src/sc_neurocore/neurons/base.py
Python
class BaseNeuron(ABC):
    """
    Abstract base class for stochastic neuron models.

    All neurons should expose:
    - step(input_current) -> spike (0 or 1)
    - reset_state()
    - get_state() -> dict
    """

    @abstractmethod
    def step(self, input_current: float) -> int:
        """Advance the neuron by one time step and return a spike (0 or 1)."""
        raise NotImplementedError

    @abstractmethod
    def reset_state(self) -> None:
        """Reset the internal state to default / initial values."""
        raise NotImplementedError

    @abstractmethod
    def get_state(self) -> dict[str, Any]:
        """Return a dict with the internal state (e.g., membrane potential)."""
        raise NotImplementedError
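The contract above can be satisfied by any discrete-time model. A minimal sketch of a conforming subclass (the `CounterNeuron` below is hypothetical, not part of the library; the ABC is mirrored locally so the snippet is self-contained):

```python
from abc import ABC, abstractmethod
from typing import Any

# Local mirror of the BaseNeuron contract (the real ABC lives in
# src/sc_neurocore/neurons/base.py).
class BaseNeuron(ABC):
    @abstractmethod
    def step(self, input_current: float) -> int: ...

    @abstractmethod
    def reset_state(self) -> None: ...

    @abstractmethod
    def get_state(self) -> dict[str, Any]: ...

# Hypothetical minimal subclass: a perfect integrator with a fixed threshold.
class CounterNeuron(BaseNeuron):
    def __init__(self, threshold: float = 1.0) -> None:
        self.threshold = threshold
        self.v = 0.0

    def step(self, input_current: float) -> int:
        self.v += input_current       # accumulate without leak
        if self.v >= self.threshold:  # fire and reset
            self.v = 0.0
            return 1
        return 0

    def reset_state(self) -> None:
        self.v = 0.0

    def get_state(self) -> dict[str, Any]:
        return {"v": self.v}

n = CounterNeuron()
spikes = [n.step(0.4) for _ in range(5)]  # fires once v crosses 1.0
```

Any class implementing these three methods plugs into code written against `BaseNeuron`.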

step(input_current) abstractmethod

Advance the neuron by one time step and return a spike (0 or 1).

Source code in src/sc_neurocore/neurons/base.py
Python
@abstractmethod
def step(self, input_current: float) -> int:
    """Advance the neuron by one time step and return a spike (0 or 1)."""
    raise NotImplementedError

reset_state() abstractmethod

Reset the internal state to default / initial values.

Source code in src/sc_neurocore/neurons/base.py
Python
@abstractmethod
def reset_state(self) -> None:
    """Reset the internal state to default / initial values."""
    raise NotImplementedError

get_state() abstractmethod

Return a dict with the internal state (e.g., membrane potential).

Source code in src/sc_neurocore/neurons/base.py
Python
@abstractmethod
def get_state(self) -> dict[str, Any]:
    """Return a dict with the internal state (e.g., membrane potential)."""
    raise NotImplementedError

sc_neurocore.neurons.stochastic_lif.StochasticLIFNeuron dataclass

Bases: BaseNeuron

Discrete-time noisy leaky integrate-and-fire neuron.

dv/dt = -(v - v_rest) / tau_mem + R * I + noise

Parameters use normalised units (voltage [0,1], time in ms). Defaults from Gerstner & Kistler, Spiking Neuron Models, 2002.

Example

>>> neuron = StochasticLIFNeuron(v_threshold=1.0, tau_mem=20.0, noise_std=0.0)
>>> spikes = [neuron.step(1.5) for _ in range(50)]
>>> sum(spikes) > 0
True
>>> neuron.get_state()  # membrane voltage + refractory counter

Process a bitstream as input current:

>>> import numpy as np
>>> bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
>>> neuron.reset_state()
>>> out = neuron.process_bitstream(bits, input_scale=2.0)
>>> out.shape
(8,)

Source code in src/sc_neurocore/neurons/stochastic_lif.py
Python
@dataclass
class StochasticLIFNeuron(BaseNeuron):
    """
    Discrete-time noisy leaky integrate-and-fire neuron.

    dv/dt = -(v - v_rest) / tau_mem + R * I + noise

    Parameters use normalised units (voltage [0,1], time in ms).
    Defaults from Gerstner & Kistler, *Spiking Neuron Models*, 2002.

    Example
    -------
    >>> neuron = StochasticLIFNeuron(v_threshold=1.0, tau_mem=20.0, noise_std=0.0)
    >>> spikes = [neuron.step(1.5) for _ in range(50)]
    >>> sum(spikes) > 0
    True
    >>> neuron.get_state()  # membrane voltage + refractory counter
    {'v': ..., 'refractory': 0}

    Process a bitstream as input current:

    >>> import numpy as np
    >>> bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
    >>> neuron.reset_state()
    >>> out = neuron.process_bitstream(bits, input_scale=2.0)
    >>> out.shape
    (8,)
    """

    v_rest: float = LIF_V_REST
    v_reset: float = LIF_V_RESET
    v_threshold: float = LIF_V_THRESHOLD
    tau_mem: float = LIF_TAU_MEM
    dt: float = LIF_DT
    noise_std: float = LIF_NOISE_STD
    resistance: float = LIF_RESISTANCE
    refractory_period: int = LIF_REFRACTORY_PERIOD
    seed: int | None = None
    entropy_source: Any | None = None  # Optional external entropy (e.g. Quantum)

    def __post_init__(self) -> None:
        if not np.isfinite(self.v_rest):
            raise ValueError("v_rest must be finite")
        if not np.isfinite(self.v_reset):
            raise ValueError("v_reset must be finite")
        if not np.isfinite(self.v_threshold):
            raise ValueError("v_threshold must be finite")
        if not np.isfinite(self.tau_mem) or self.tau_mem <= 0:
            raise ValueError(f"tau_mem must be > 0, got {self.tau_mem}")
        if not np.isfinite(self.dt) or self.dt <= 0:
            raise ValueError(f"dt must be > 0, got {self.dt}")
        if not np.isfinite(self.noise_std) or self.noise_std < 0:
            raise ValueError(f"noise_std must be >= 0, got {self.noise_std}")
        if not np.isfinite(self.resistance):
            raise ValueError("resistance must be finite")
        if self.refractory_period < 0:
            raise ValueError("refractory_period must be non-negative")
        self._rng = RNG(self.seed)
        self.v = self.v_rest
        self.refractory_counter = 0
        self.reset_state()

    def step(self, input_current: float) -> int:
        if not np.isfinite(input_current):
            raise ValueError("input_current must be finite")
        if self.refractory_counter > 0:
            self.refractory_counter -= 1
            self.v = self.v_rest
            return 0

        # Membrane leak term
        dv_leak = -(self.v - self.v_rest) * (self.dt / self.tau_mem)

        # Input term (simple Ohm's law; you can absorb R into current)
        dv_input = self.resistance * input_current * self.dt

        # Noise term (Euler-Maruyama: sigma * sqrt(dt) * N(0,1))
        dv_noise = 0.0
        if self.noise_std > 0.0:
            sqrt_dt = self.dt**0.5
            if self.entropy_source is not None:
                dv_noise = float(self.entropy_source.sample_normal(0.0, self.noise_std * sqrt_dt))
            else:
                dv_noise = float(self._rng.normal(0.0, self.noise_std * sqrt_dt))

        # Update membrane potential
        self.v += dv_leak + dv_input + dv_noise

        # Check for spike
        if self.v >= self.v_threshold:
            spike = 1
            self.v = self.v_reset
            self.refractory_counter = self.refractory_period
        else:
            spike = 0
        return spike

    def reset_state(self) -> None:
        self.v = self.v_rest
        self.refractory_counter = 0

    def get_state(self) -> dict[str, Any]:
        return {"v": float(self.v), "refractory": self.refractory_counter}

    def process_bitstream(
        self, input_bits: npt.ArrayLike, input_scale: float = 1.0
    ) -> npt.NDArray[np.uint8]:
        """
        Process a bitstream (array of 0s and 1s) as input current.
        Returns an array of spikes (0s and 1s).

        input_scale: scaling factor to convert bit (0/1) to current amplitude.
        """
        bits = np.asarray(input_bits)
        if bits.ndim != 1:
            raise ValueError("input_bits must be a one-dimensional bitstream")
        if not np.all(np.isfinite(bits)):
            raise ValueError("input_bits must contain only finite values")
        if np.any((bits != 0) & (bits != 1)):
            raise ValueError("input_bits must contain only binary 0/1 values")
        if not np.isfinite(input_scale):
            raise ValueError("input_scale must be finite")

        spikes = np.zeros_like(bits, dtype=np.uint8)
        for i, bit in enumerate(bits):
            # Treat bit as current pulse of amplitude 'input_scale'
            current = bit * input_scale
            spikes[i] = self.step(current)
        return spikes

process_bitstream(input_bits, input_scale=1.0)

Process a bitstream (array of 0s and 1s) as input current. Returns an array of spikes (0s and 1s).

input_scale: scaling factor to convert bit (0/1) to current amplitude.

Source code in src/sc_neurocore/neurons/stochastic_lif.py
Python
def process_bitstream(
    self, input_bits: npt.ArrayLike, input_scale: float = 1.0
) -> npt.NDArray[np.uint8]:
    """
    Process a bitstream (array of 0s and 1s) as input current.
    Returns an array of spikes (0s and 1s).

    input_scale: scaling factor to convert bit (0/1) to current amplitude.
    """
    bits = np.asarray(input_bits)
    if bits.ndim != 1:
        raise ValueError("input_bits must be a one-dimensional bitstream")
    if not np.all(np.isfinite(bits)):
        raise ValueError("input_bits must contain only finite values")
    if np.any((bits != 0) & (bits != 1)):
        raise ValueError("input_bits must contain only binary 0/1 values")
    if not np.isfinite(input_scale):
        raise ValueError("input_scale must be finite")

    spikes = np.zeros_like(bits, dtype=np.uint8)
    for i, bit in enumerate(bits):
        # Treat bit as current pulse of amplitude 'input_scale'
        current = bit * input_scale
        spikes[i] = self.step(current)
    return spikes
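The Euler update above can be exercised standalone. A deterministic sketch of the same step (noise_std = 0; parameter defaults assumed from the class docstring) showing the monotone f-I behaviour:

```python
# Deterministic sketch of the StochasticLIFNeuron update (noise_std = 0),
# using the same Euler step as the source:
#   dv = -(v - v_rest) * dt / tau_mem + resistance * I * dt
# Parameter values below are assumptions for illustration.
def simulate_lif(current, steps=200, v_rest=0.0, v_reset=0.0,
                 v_threshold=1.0, tau_mem=20.0, dt=1.0, resistance=1.0,
                 refractory_period=2):
    v, refr, spikes = v_rest, 0, 0
    for _ in range(steps):
        if refr > 0:          # hold at rest during refractory period
            refr -= 1
            v = v_rest
            continue
        v += -(v - v_rest) * (dt / tau_mem) + resistance * current * dt
        if v >= v_threshold:  # spike and reset
            spikes += 1
            v = v_reset
            refr = refractory_period
    return spikes

# Below rheobase (steady-state R*I*tau < threshold) the neuron stays
# silent; above it, the firing rate grows with drive.
rates = [simulate_lif(I) for I in (0.02, 0.1, 0.5)]
```

With these defaults the steady-state voltage is `R*I*tau_mem`, so `I = 0.02` (steady state 0.4) never reaches threshold, while 0.1 and 0.5 do.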

sc_neurocore.neurons.fixed_point_lif.FixedPointLIFNeuron dataclass

Bit-true fixed-point model of the Verilog sc_lif_neuron.

All arithmetic is performed in signed Q(FRACTION) fixed-point with explicit bit-width masking so that overflow/wrap behaviour matches the hardware exactly.

Parameters

data_width : int
    Total bit width of all fixed-point values (default 16).
fraction : int
    Number of fractional bits (default 8, giving Q8.8).
v_rest, v_reset, v_threshold : int
    Membrane parameters in Q(FRACTION) fixed-point.
refractory_period : int
    Number of clock cycles to hold after a spike.

Example

>>> neuron = FixedPointLIFNeuron()
>>> spike, v = neuron.step(leak_k=240, gain_k=16, I_t=100)
>>> spike in (0, 1)
True
>>> neuron.reset()

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
Python
@dataclass
class FixedPointLIFNeuron:
    """
    Bit-true fixed-point model of the Verilog ``sc_lif_neuron``.

    All arithmetic is performed in signed Q(FRACTION) fixed-point with
    explicit bit-width masking so that overflow/wrap behaviour matches
    the hardware exactly.

    Parameters
    ----------
    data_width : int
        Total bit width of all fixed-point values (default 16).
    fraction : int
        Number of fractional bits (default 8, giving Q8.8).
    v_rest, v_reset, v_threshold : int
        Membrane parameters in Q(FRACTION) fixed-point.
    refractory_period : int
        Number of clock cycles to hold after a spike.

    Example
    -------
    >>> neuron = FixedPointLIFNeuron()
    >>> spike, v = neuron.step(leak_k=240, gain_k=16, I_t=100)
    >>> spike in (0, 1)
    True
    >>> neuron.reset()
    """

    data_width: int = FP_DATA_WIDTH
    fraction: int = FP_FRACTION
    v_rest: int = 0
    v_reset: int = 0
    v_threshold: int = FP_V_THRESHOLD
    refractory_period: int = FP_REFRACTORY_PERIOD

    def __post_init__(self) -> None:
        if not 1 <= self.data_width <= 32:
            raise ValueError(f"data_width must be in [1, 32], got {self.data_width}")
        if not 0 <= self.fraction < self.data_width:
            raise ValueError(f"fraction must be in [0, data_width), got {self.fraction}")
        if self.refractory_period < 0:
            raise ValueError(f"refractory_period must be >= 0, got {self.refractory_period}")
        self.v: int = self.v_rest
        self.refractory_counter: int = 0

    def step(self, leak_k: int, gain_k: int, I_t: int, noise_in: int = 0) -> tuple[int, int]:
        """
        Execute one clock cycle — bit-true match to Verilog RTL.

        Parameters
        ----------
        leak_k : int   – ALPHA_LEAK in Q(FRACTION)
        gain_k : int   – GAIN_IN in Q(FRACTION)
        I_t    : int   – Input current in Q(FRACTION)
        noise_in : int – External noise in Q(FRACTION)

        Returns
        -------
        (spike, v_out) : tuple[int, int]
        """
        W = self.data_width

        if self.refractory_counter > 0:
            self.refractory_counter -= 1
            self.v = self.v_rest
            return 0, _mask(self.v, W)

        # --- Leak term: (V_REST - v) * leak_k >>> FRACTION ---
        diff = _mask(self.v_rest - self.v, 2 * W)
        leak_mul = diff * leak_k
        # Arithmetic right shift (Python >> is arithmetic for negative ints)
        dv_leak = leak_mul >> self.fraction

        # --- Input term: I_t * gain_k >>> FRACTION ---
        in_mul = I_t * gain_k
        dv_in = in_mul >> self.fraction

        # --- Next membrane potential ---
        v_next = _mask(self.v + dv_leak + dv_in + noise_in, W)

        # --- Threshold check ---
        if v_next >= self.v_threshold:
            spike = 1
            self.v = self.v_reset
            self.refractory_counter = self.refractory_period
        else:
            spike = 0
            self.v = v_next

        return spike, _mask(self.v, W)

    def reset(self) -> None:
        """Reset neuron state to power-on defaults."""
        self.v = self.v_rest
        self.refractory_counter = 0

    # Aliases for BaseNeuron-compatible interface
    def reset_state(self) -> None:
        """Reset internal state (alias for :meth:`reset`)."""
        self.reset()

    def get_state(self) -> Dict[str, Any]:
        """Return dict with internal state."""
        return {
            "v": self.v,
            "refractory_counter": self.refractory_counter,
        }

step(leak_k, gain_k, I_t, noise_in=0)

Execute one clock cycle — bit-true match to Verilog RTL.

Parameters

leak_k : int – ALPHA_LEAK in Q(FRACTION)
gain_k : int – GAIN_IN in Q(FRACTION)
I_t : int – Input current in Q(FRACTION)
noise_in : int – External noise in Q(FRACTION)

Returns

(spike, v_out) : tuple[int, int]

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
Python
def step(self, leak_k: int, gain_k: int, I_t: int, noise_in: int = 0) -> tuple[int, int]:
    """
    Execute one clock cycle — bit-true match to Verilog RTL.

    Parameters
    ----------
    leak_k : int   – ALPHA_LEAK in Q(FRACTION)
    gain_k : int   – GAIN_IN in Q(FRACTION)
    I_t    : int   – Input current in Q(FRACTION)
    noise_in : int – External noise in Q(FRACTION)

    Returns
    -------
    (spike, v_out) : tuple[int, int]
    """
    W = self.data_width

    if self.refractory_counter > 0:
        self.refractory_counter -= 1
        self.v = self.v_rest
        return 0, _mask(self.v, W)

    # --- Leak term: (V_REST - v) * leak_k >>> FRACTION ---
    diff = _mask(self.v_rest - self.v, 2 * W)
    leak_mul = diff * leak_k
    # Arithmetic right shift (Python >> is arithmetic for negative ints)
    dv_leak = leak_mul >> self.fraction

    # --- Input term: I_t * gain_k >>> FRACTION ---
    in_mul = I_t * gain_k
    dv_in = in_mul >> self.fraction

    # --- Next membrane potential ---
    v_next = _mask(self.v + dv_leak + dv_in + noise_in, W)

    # --- Threshold check ---
    if v_next >= self.v_threshold:
        spike = 1
        self.v = self.v_reset
        self.refractory_counter = self.refractory_period
    else:
        spike = 0
        self.v = v_next

    return spike, _mask(self.v, W)

reset()

Reset neuron state to power-on defaults.

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
Python
def reset(self) -> None:
    """Reset neuron state to power-on defaults."""
    self.v = self.v_rest
    self.refractory_counter = 0

reset_state()

Reset internal state (alias for reset()).

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
Python
def reset_state(self) -> None:
    """Reset internal state (alias for :meth:`reset`)."""
    self.reset()

get_state()

Return dict with internal state.

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
Python
def get_state(self) -> Dict[str, Any]:
    """Return dict with internal state."""
    return {
        "v": self.v,
        "refractory_counter": self.refractory_counter,
    }
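All FixedPointLIFNeuron parameters are plain integers in Q(FRACTION). A sketch of float-to-Q8.8 conversion helpers (`to_q`/`from_q` are illustrative names, not library API; `fraction = 8` assumed from the defaults):

```python
# Illustrative float <-> fixed-point helpers for the Q8.8 values consumed
# by FixedPointLIFNeuron (sketch names, not part of the library).
FRACTION = 8

def to_q(x: float, fraction: int = FRACTION) -> int:
    """Encode a float as a signed Q(fraction) integer, rounding to nearest."""
    return int(round(x * (1 << fraction)))

def from_q(q: int, fraction: int = FRACTION) -> float:
    """Decode a signed Q(fraction) integer back to a float."""
    return q / (1 << fraction)

# leak_k = 240 from the example above decodes to 240/256 = 0.9375;
# a threshold of 1.5 encodes as 384.
leak = from_q(240)
thr = to_q(1.5)
```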

sc_neurocore.neurons.sc_izhikevich.SCIzhikevichNeuron dataclass

Bases: BaseNeuron

Stochastic Izhikevich neuron (software-only).

Standard Izhikevich model (IEEE TNN 14(6), 2003):

v' = 0.04*v^2 + 5*v + 140 - u + I + noise
u' = a*(b*v - u)

When v >= 30 mV: spike, then v <- c, u <- u + d.

Example

>>> neuron = SCIzhikevichNeuron(noise_std=0.0)
>>> spikes = [neuron.step(10.0) for _ in range(100)]
>>> sum(spikes) > 0  # regular spiking with I=10
True

Integrator options:

- baseline_half_euler preserves the historical two-half-step path
- rk4 is an explicit higher-order alternative path

Source code in src/sc_neurocore/neurons/sc_izhikevich.py
Python
 30
 31
 32
 33
 34
 35
 36
 37
 38
 39
 40
 41
 42
 43
 44
 45
 46
 47
 48
 49
 50
 51
 52
 53
 54
 55
 56
 57
 58
 59
 60
 61
 62
 63
 64
 65
 66
 67
 68
 69
 70
 71
 72
 73
 74
 75
 76
 77
 78
 79
 80
 81
 82
 83
 84
 85
 86
 87
 88
 89
 90
 91
 92
 93
 94
 95
 96
 97
 98
 99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
@dataclass
class SCIzhikevichNeuron(BaseNeuron):
    """
    Stochastic Izhikevich neuron (software-only).

    Standard Izhikevich model (IEEE TNN 14(6), 2003):
    v' = 0.04*v^2 + 5*v + 140 - u + I + noise
    u' = a*(b*v - u)

    When v >= 30 mV: spike, then v <- c, u <- u + d.

    Example
    -------
    >>> neuron = SCIzhikevichNeuron(noise_std=0.0)
    >>> spikes = [neuron.step(10.0) for _ in range(100)]
    >>> sum(spikes) > 0  # regular spiking with I=10
    True

    Integrator options:
    - ``baseline_half_euler`` preserves the historical two-half-step path
    - ``rk4`` is an explicit higher-order alternative path
    """

    a: float = IZH_A
    b: float = IZH_B
    c: float = IZH_C
    d: float = IZH_D
    dt: float = LIF_DT
    noise_std: float = 0.0
    seed: int | None = None
    integrator: Literal["baseline_half_euler", "rk4"] = "baseline_half_euler"

    def __post_init__(self) -> None:
        if self.integrator not in {"baseline_half_euler", "rk4"}:
            raise ValueError(f"Unsupported integrator for SCIzhikevichNeuron: {self.integrator}")
        for name in ("a", "b", "c", "d"):
            self._require_finite(name, getattr(self, name))
        self.dt = self._require_positive("dt", self.dt)
        self.noise_std = self._require_nonnegative("noise_std", self.noise_std)
        self._rng = RNG(self.seed)
        self.v: float = self.c
        self.u: float = self.b * self.c
        self.reset_state()

    @staticmethod
    def _require_finite(name: str, value: float) -> float:
        if not isinstance(value, int | float) or not math.isfinite(float(value)):
            raise ValueError(f"{name} must be finite")
        return float(value)

    @classmethod
    def _require_positive(cls, name: str, value: float) -> float:
        result = cls._require_finite(name, value)
        if result <= 0.0:
            raise ValueError(f"{name} must be positive")
        return result

    @classmethod
    def _require_nonnegative(cls, name: str, value: float) -> float:
        result = cls._require_finite(name, value)
        if result < 0.0:
            raise ValueError(f"{name} must be non-negative")
        return result

    def step(self, input_current: float) -> int:
        input_current = self._require_finite("input_current", input_current)
        if self.integrator == "baseline_half_euler":
            return self._step_baseline_half_euler(input_current)
        return self._step_rk4(input_current)

    def _rhs(self, v: float, u: float, input_current: float) -> tuple[float, float]:
        dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + input_current
        du = self.a * (self.b * v - u)
        return dv, du

    def _apply_noise_and_threshold(self) -> int:
        if self.noise_std > 0.0:
            self.v += float(self._rng.normal(0.0, self.noise_std))

        if self.v >= IZH_SPIKE_THRESHOLD:
            self.v = self.c
            self.u += self.d
            return 1
        return 0

    def _step_baseline_half_euler(self, input_current: float) -> int:
        # Two half-steps for numerical stability on 0.04v² term.
        # Izhikevich (2003) recommends dt ≤ 0.5 ms; we split each dt into two.
        half_dt = self.dt * 0.5
        for _ in range(2):
            dv, du = self._rhs(self.v, self.u, input_current)
            dv *= half_dt
            du *= half_dt
            self.v += dv
            self.u += du
        return self._apply_noise_and_threshold()

    def _step_rk4(self, input_current: float) -> int:
        state = np.array([self.v, self.u], dtype=np.float64)

        def rhs(state_vec: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
            dv, du = self._rhs(float(state_vec[0]), float(state_vec[1]), input_current)
            return np.array([dv, du], dtype=np.float64)

        k1 = rhs(state)
        k2 = rhs(state + 0.5 * self.dt * k1)
        k3 = rhs(state + 0.5 * self.dt * k2)
        k4 = rhs(state + self.dt * k3)
        state = state + (self.dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        self.v = float(state[0])
        self.u = float(state[1])
        return self._apply_noise_and_threshold()

    def reset_state(self) -> None:
        self.v = self.c  # membrane potential
        self.u = self.b * self.v  # recovery variable

    def get_state(self) -> dict[str, Any]:
        return {"v": float(self.v), "u": float(self.u)}
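The two-half-step Euler path can be sketched standalone with the classic regular-spiking constants (a=0.02, b=0.2, c=-65, d=8; these values are assumed here, while the library reads them from its IZH_* defaults), without noise:

```python
# Standalone sketch of the baseline_half_euler integrator with assumed
# regular-spiking parameters. Not the library implementation.
def izhikevich_spike_count(I=10.0, steps=500, dt=0.5,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u, spikes = c, b * c, 0
    for _ in range(steps):
        for _ in range(2):  # two half-steps tame the stiff 0.04*v**2 term
            dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
            du = a * (b * v - u)
            v += 0.5 * dt * dv
            u += 0.5 * dt * du
        if v >= 30.0:       # spike cutoff: reset v, bump recovery u
            v = c
            u += d
            spikes += 1
    return spikes

n_spikes = izhikevich_spike_count()  # tonic firing with I = 10
```

With I = 10 the v- and u-nullclines no longer intersect, so the model fires tonically.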

sc_neurocore.neurons.homeostatic_lif.HomeostaticLIFNeuron dataclass

Bases: StochasticLIFNeuron

LIF neuron with homeostatic threshold adaptation.

Self-regulates firing rate toward a target setpoint via exponential moving average of spike rate. Based on Turrigiano (2012).

Example

>>> neuron = HomeostaticLIFNeuron(target_rate=0.1, noise_std=0.0)
>>> for _ in range(200):
...     neuron.step(1.5)
>>> neuron.v_threshold != 1.0  # threshold adapted
True

Source code in src/sc_neurocore/neurons/homeostatic_lif.py
Python
@dataclass
class HomeostaticLIFNeuron(StochasticLIFNeuron):
    """
    LIF neuron with homeostatic threshold adaptation.

    Self-regulates firing rate toward a target setpoint via exponential
    moving average of spike rate. Based on Turrigiano (2012).

    Example
    -------
    >>> neuron = HomeostaticLIFNeuron(target_rate=0.1, noise_std=0.0)
    >>> for _ in range(200):
    ...     neuron.step(1.5)
    >>> neuron.v_threshold != 1.0  # threshold adapted
    True
    """

    target_rate: float = HOMEOSTATIC_TARGET_RATE
    adaptation_rate: float = HOMEOSTATIC_ADAPTATION_RATE
    rate_trace: float = 0.0
    trace_decay: float = HOMEOSTATIC_TRACE_DECAY

    def __post_init__(self) -> None:
        super().__post_init__()
        self.initial_threshold: float = self.v_threshold

    def step(self, input_current: float) -> int:
        spike = super().step(input_current)

        self.rate_trace = self.rate_trace * self.trace_decay + spike * (1.0 - self.trace_decay)

        error = self.rate_trace - self.target_rate
        self.v_threshold += self.adaptation_rate * error
        self.v_threshold = max(
            THRESHOLD_FLOOR,
            min(self.v_threshold, self.initial_threshold * THRESHOLD_CEILING_MULT),
        )

        return spike

    def get_state(self) -> Dict[str, Any]:
        s = super().get_state()
        s["threshold"] = float(self.v_threshold)
        s["rate_trace"] = float(self.rate_trace)
        return s
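The homeostatic rule can be seen in isolation. A sketch with constants assumed for illustration (the library takes them from HOMEOSTATIC_* defaults), showing the spike-rate EMA pulling the threshold up when the neuron fires above target_rate:

```python
# The homeostatic update in isolation: an exponential moving average of
# the spike train is compared with target_rate, and the error nudges the
# threshold. Constants are assumptions for illustration.
target_rate = 0.1
adaptation_rate = 0.01
trace_decay = 0.99

threshold = 1.0
trace = 0.0

# A neuron firing on every step (rate 1.0) is far above target, so the
# threshold should climb.
for spike in [1] * 500:
    trace = trace * trace_decay + spike * (1.0 - trace_decay)
    threshold += adaptation_rate * (trace - target_rate)
```

Conversely, a silent neuron drives the trace toward 0 and the threshold down (bounded by THRESHOLD_FLOOR in the class).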

sc_neurocore.neurons.dendritic.StochasticDendriticNeuron dataclass

XOR-nonlinearity neuron with shunting inhibition.

Implements d1 + d2 - 2*d1*d2 (XOR truth table for binary inputs). Based on Koch, Biophysics of Computation, 1999, Ch. 12.

Source code in src/sc_neurocore/neurons/dendritic.py
Python
@dataclass
class StochasticDendriticNeuron:
    """
    XOR-nonlinearity neuron with shunting inhibition.

    Implements ``d1 + d2 - 2*d1*d2`` (XOR truth table for binary inputs).
    Based on Koch, *Biophysics of Computation*, 1999, Ch. 12.
    """

    threshold: float = DENDRITIC_THRESHOLD
    _last_current: float = field(default=0.0, init=False, repr=False)

    def step(self, input_a: float, input_b: float) -> int:
        d1 = input_a
        d2 = input_b

        # XOR nonlinearity: d1 + d2 - 2*d1*d2
        current = d1 + d2 - 2.0 * (d1 * d2)

        self._last_current = current
        if current > self.threshold:
            return 1
        return 0

    def reset_state(self) -> None:
        """Reset internal state to defaults."""
        self._last_current = 0.0

    def get_state(self) -> Dict[str, Any]:
        """Return dict with internal state."""
        return {"last_current": self._last_current, "threshold": self.threshold}

reset_state()

Reset internal state to defaults.

Source code in src/sc_neurocore/neurons/dendritic.py
Python
def reset_state(self) -> None:
    """Reset internal state to defaults."""
    self._last_current = 0.0

get_state()

Return dict with internal state.

Source code in src/sc_neurocore/neurons/dendritic.py
Python
def get_state(self) -> Dict[str, Any]:
    """Return dict with internal state."""
    return {"last_current": self._last_current, "threshold": self.threshold}
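The XOR nonlinearity on binary inputs, sketched standalone with an assumed threshold of 0.5 (the class takes its default from DENDRITIC_THRESHOLD):

```python
# d1 + d2 - 2*d1*d2 is 0 when binary inputs agree and 1 when they differ,
# so thresholding it (0.5 assumed here) reproduces the XOR truth table.
def dendritic_step(a: float, b: float, threshold: float = 0.5) -> int:
    current = a + b - 2.0 * a * b
    return 1 if current > threshold else 0

truth_table = [dendritic_step(a, b)
               for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```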

Extended Model Library (neurons/models/)

Integrate-and-Fire Variants (27)

| Model | File | Reference |
| --- | --- | --- |
| AdEx | adex.py | Brette & Gerstner 2005 |
| ExpIF | expif.py | Fourcaud-Trocme 2003 |
| Lapicque | lapicque.py | Lapicque 1907 |
| QIF | quadratic_if.py | Latham 2000 |
| GLIF (5 levels) | glif.py | Teeter 2018, Allen Institute |
| MAT | mat.py | Kobayashi 2009 |
| SFA | sfa.py | Benda & Herz 2003 |
| Stochastic IF | stochastic_if.py | Brunel & Hakim 1999 |
| Escape-rate | escape_rate.py | Gerstner 2000 |
| Fractional LIF | fractional_lif.py | Lundstrom 2008 |
| COBA LIF | coba_lif.py | Conductance-based |
| Perfect Integrator | perfect_integrator.py | Non-leaky IF |
| NLIF | nlif.py | Cubic nonlinearity |
| Adaptive Threshold | adaptive_threshold_if.py | Dynamic threshold |
| PLIF | plif.py | Fang 2021, learnable tau |
| Non-Resetting LIF | non_resetting_lif.py | Kobayashi 2009 |
| Gated LIF | gated_lif.py | Yao 2022, NeurIPS |
| Sigma-Delta | sigma_delta.py | Yoon 2017 |
| TC-LIF | tc_lif.py | AAAI 2024 |
| Benda-Herz | benda_herz.py | Benda 2003 |
| Integer QIF | iqif.py | Lo 2021, fixed-point |
| Complementary LIF | clif.py | ICML 2024, dual paths |
| K-LIF | klif.py | Learnable scaling |
| Inhibitory LIF | ilif.py | 2025, temporal inhibition |
| E-prop ALIF | e_prop_alif.py | Bellec 2020, eligibility |
| Izhikevich 2007 | izhikevich2007.py | Izhikevich 2007 biophysical |
| Energy LIF | energy_lif.py | Fardet 2020 |

Biophysical / Conductance-Based (11)

| Model | File | Reference |
| --- | --- | --- |
| Hodgkin-Huxley | hodgkin_huxley.py | HH 1952 (Nobel Prize) |
| Connor-Stevens | connor_stevens.py | Connor 1977, A-type K+ |
| Wang-Buzsaki | wang_buzsaki.py | Wang 1996, FS interneuron |
| Pinsky-Rinzel | pinsky_rinzel.py | Pinsky 1994, 2-compartment |
| Destexhe | destexhe_thalamic.py | Destexhe 1993, T-current |
| Huber-Braun | huber_braun.py | Braun 1998, cold receptor |
| Gutkin-Ermentrout | gutkin_ermentrout.py | Gutkin 1998 |
| Traub-Miles | traub_miles.py | Traub 1991, hippocampal |
| Golomb FS | golomb_fs.py | Golomb 2007, Kv3 channels |
| Mainen-Sejnowski | mainen_sejnowski.py | Mainen 1996, axonal Na |
| Pospischil | pospischil.py | Pospischil 2008, 5 types |

Oscillatory / Qualitative (8)

| Model | File | Reference |
| --- | --- | --- |
| FitzHugh-Nagumo | fitzhugh_nagumo.py | FitzHugh 1961 |
| Morris-Lecar | morris_lecar.py | Morris 1981 |
| Hindmarsh-Rose | hindmarsh_rose.py | HR 1984, chaotic bursting |
| Resonate-and-Fire | resonate_and_fire.py | Izhikevich 2001 |
| Balanced Resonate-and-Fire | balanced_resonate_and_fire.py | Higuchi et al. 2024 |
| Theta | theta.py | Ermentrout 1986 |
| FitzHugh-Rinzel | fitzhugh_rinzel.py | FitzHugh 1976, 3D |
| Terman-Wang | terman_wang.py | Terman 1995, LEGION |

Bursting (6)

| Model | File | Reference |
| --- | --- | --- |
| Chay | chay.py | Chay 1985, pancreatic beta |
| Butera | butera_respiratory.py | Butera 1999, respiratory |
| Sherman-Rinzel-Keizer | sherman_rinzel_keizer.py | Sherman 1988 |
| Plant R15 | plant_r15.py | Plant 1981, Aplysia |
| Bertram Phantom | bertram_phantom.py | Bertram 2008 |
| Pernarowski | pernarowski.py | Pernarowski 1994 |

Multi-Compartment (4)

| Model | File | Reference |
| --- | --- | --- |
| Hay L5 Pyramidal | hay_l5.py | Hay 2011, 3-compartment BAC firing |
| Booth-Rinzel | booth_rinzel.py | Booth 1995, bistable motoneuron |
| Dendrify | dendrify.py | Beniaguev 2022, active dendrite |
| TC-LIF | tc_lif.py | AAAI 2024, soma+dendrite |

Synaptic (3)

Alpha, Synaptic (dual-exp), Tsodyks-Markram (STP)

Map-Based / Discrete (6)

Rulkov, Chialvo, Courbage-Nekorkin, Medvedev, Ibarz-Tanaka, Cazelles

Stochastic (4)

Poisson, Inhomogeneous Poisson, Galves-Locherbach, GLM (Pillow 2008)

Population / Neural Mass (7)

Wilson-Cowan, Jansen-Rit (EEG), Wong-Wang (decision), Ermentrout-Kopell (exact mean-field), Amari (neural field), Wendling (extended JR, epilepsy EEG), Larter-Breakspear (TVB whole-brain)

Hardware-Specific (9)

Loihi CUBA, Loihi 2, TrueNorth, BrainScaleS AdEx, SpiNNaker LIF, SpiNNaker2, DPI/DYNAP-SE, Akida, Sigma-Delta

Rate Models (3)

McCulloch-Pitts (1943), Sigmoid Rate, Threshold-Linear (ReLU)

Other (5)

SRM/SRM0 (kernel), McKean (piecewise FHN), Leaky-Compete-Fire (WTA), Prescott (Type I/II/III), Compte (NMDA working memory)

PyTorch Training Cells (10)

Differentiable spiking neurons for surrogate gradient training:

| Cell | Module | Reference |
| --- | --- | --- |
| LIFCell | training.snn_modules | Standard LIF |
| IFCell | training.snn_modules | No leak |
| SynapticCell | training.snn_modules | Dual-exponential |
| ALIFCell | training.snn_modules | Bellec 2020 |
| RecurrentLIFCell | training.snn_modules | Orthogonal init |
| ExpIFCell | training.snn_modules | Exponential |
| AdExCell | training.snn_modules | Adaptive exponential |
| LapicqueCell | training.snn_modules | RC circuit |
| AlphaCell | training.snn_modules | Alpha synapse |
| SecondOrderLIFCell | training.snn_modules | Inertial term |
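These cells train with surrogate gradients: the forward pass keeps the hard spike threshold while the backward pass substitutes a smooth derivative. A NumPy sketch of that idea (fast-sigmoid surrogate; helper names are illustrative, not the sc_neurocore API):

```python
import numpy as np

# Surrogate-gradient sketch: forward uses the non-differentiable step,
# backward replaces its gradient with a smooth fast-sigmoid derivative.
def spike_forward(v: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    return (v >= threshold).astype(v.dtype)

def spike_surrogate_grad(v: np.ndarray, threshold: float = 1.0,
                         beta: float = 10.0) -> np.ndarray:
    # Fast-sigmoid surrogate derivative: 1 / (1 + beta*|v - threshold|)**2
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.1])
s = spike_forward(v)         # hard spikes: only v >= 1.0 fires
g = spike_surrogate_grad(v)  # largest near the threshold
```

In a framework with autograd, the surrogate replaces the step's gradient in the backward pass only; the forward spikes stay binary.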