
Neurons — 122 Models

122 neuron models spanning 83 years of computational neuroscience (1943-2026): 109 individual biological model files under neurons/models/, plus 5 core stochastic-computing neurons and 9 AI-optimized models (ArcaneNeuron and 8 novel designs in ai_optimized.py).

Quick Start

# Flat import (any model)
from sc_neurocore.neurons import HodgkinHuxleyNeuron, AdExNeuron

# Individual file import
from sc_neurocore.neurons.models.hodgkin_huxley import HodgkinHuxleyNeuron

Core SC Neurons (bitstream-capable)

Class Domain
StochasticLIFNeuron Software simulation (fast)
FixedPointLIFNeuron Bit-true Q8.8 hardware model
HomeostaticLIFNeuron Self-regulating firing rate
SCIzhikevichNeuron Rich dynamics (bursting, chattering)
StochasticDendriticNeuron XOR dendritic processing

sc_neurocore.neurons.base.BaseNeuron

Bases: ABC

Abstract base class for stochastic neuron models.

All neurons should expose:

- step(input_current) -> spike (0 or 1)
- reset_state()
- get_state() -> dict

Source code in src/sc_neurocore/neurons/base.py
class BaseNeuron(ABC):
    """
    Abstract base class for stochastic neuron models.

    All neurons should expose:
    - step(input_current) -> spike (0 or 1)
    - reset_state()
    - get_state() -> dict
    """

    @abstractmethod
    def step(self, input_current: float) -> int:
        """Advance the neuron by one time step and return a spike (0 or 1)."""
        raise NotImplementedError

    @abstractmethod
    def reset_state(self) -> None:
        """Reset the internal state to default / initial values."""
        raise NotImplementedError

    @abstractmethod
    def get_state(self) -> Dict[str, Any]:
        """Return a dict with the internal state (e.g., membrane potential)."""
        raise NotImplementedError

step(input_current) abstractmethod

Advance the neuron by one time step and return a spike (0 or 1).

Source code in src/sc_neurocore/neurons/base.py
@abstractmethod
def step(self, input_current: float) -> int:
    """Advance the neuron by one time step and return a spike (0 or 1)."""
    raise NotImplementedError

reset_state() abstractmethod

Reset the internal state to default / initial values.

Source code in src/sc_neurocore/neurons/base.py
@abstractmethod
def reset_state(self) -> None:
    """Reset the internal state to default / initial values."""
    raise NotImplementedError

get_state() abstractmethod

Return a dict with the internal state (e.g., membrane potential).

Source code in src/sc_neurocore/neurons/base.py
@abstractmethod
def get_state(self) -> Dict[str, Any]:
    """Return a dict with the internal state (e.g., membrane potential)."""
    raise NotImplementedError
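The abstract interface above can be exercised with a minimal concrete subclass. The sketch below is illustrative only (CountingNeuron is hypothetical, not part of the library, and the ABC is restated locally so the snippet runs standalone; in practice you would subclass sc_neurocore.neurons.base.BaseNeuron):

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

# Local restatement of the documented interface (not the library class).
class BaseNeuron(ABC):
    @abstractmethod
    def step(self, input_current: float) -> int: ...
    @abstractmethod
    def reset_state(self) -> None: ...
    @abstractmethod
    def get_state(self) -> Dict[str, Any]: ...

class CountingNeuron(BaseNeuron):
    """Toy neuron: integrates current and spikes past a fixed threshold."""

    def __init__(self, threshold: float = 1.0) -> None:
        self.threshold = threshold
        self.v = 0.0

    def step(self, input_current: float) -> int:
        self.v += input_current
        if self.v >= self.threshold:
            self.v = 0.0  # hard reset on spike
            return 1
        return 0

    def reset_state(self) -> None:
        self.v = 0.0

    def get_state(self) -> Dict[str, Any]:
        return {"v": self.v}

n = CountingNeuron(threshold=1.0)
spikes = [n.step(0.4) for _ in range(5)]  # accumulates 0.4 per step
```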

sc_neurocore.neurons.stochastic_lif.StochasticLIFNeuron dataclass

Bases: BaseNeuron

Discrete-time noisy leaky integrate-and-fire neuron.

dv/dt = -(v - v_rest) / tau_mem + R * I + noise

Parameters use normalised units (voltage [0,1], time in ms). Defaults from Gerstner & Kistler, Spiking Neuron Models, 2002.

Example

>>> neuron = StochasticLIFNeuron(v_threshold=1.0, tau_mem=20.0, noise_std=0.0)
>>> spikes = [neuron.step(1.5) for _ in range(50)]
>>> sum(spikes) > 0
True
>>> neuron.get_state()  # membrane voltage + refractory counter
{'v': ..., 'refractory': 0}

Process a bitstream as input current:

>>> import numpy as np
>>> bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
>>> neuron.reset_state()
>>> out = neuron.process_bitstream(bits, input_scale=2.0)
>>> out.shape
(8,)

Source code in src/sc_neurocore/neurons/stochastic_lif.py
@dataclass
class StochasticLIFNeuron(BaseNeuron):
    """
    Discrete-time noisy leaky integrate-and-fire neuron.

    dv/dt = -(v - v_rest) / tau_mem + R * I + noise

    Parameters use normalised units (voltage [0,1], time in ms).
    Defaults from Gerstner & Kistler, *Spiking Neuron Models*, 2002.

    Example
    -------
    >>> neuron = StochasticLIFNeuron(v_threshold=1.0, tau_mem=20.0, noise_std=0.0)
    >>> spikes = [neuron.step(1.5) for _ in range(50)]
    >>> sum(spikes) > 0
    True
    >>> neuron.get_state()  # membrane voltage + refractory counter
    {'v': ..., 'refractory': 0}

    Process a bitstream as input current:

    >>> import numpy as np
    >>> bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
    >>> neuron.reset_state()
    >>> out = neuron.process_bitstream(bits, input_scale=2.0)
    >>> out.shape
    (8,)
    """

    v_rest: float = LIF_V_REST
    v_reset: float = LIF_V_RESET
    v_threshold: float = LIF_V_THRESHOLD
    tau_mem: float = LIF_TAU_MEM
    dt: float = LIF_DT
    noise_std: float = LIF_NOISE_STD
    resistance: float = LIF_RESISTANCE
    refractory_period: int = LIF_REFRACTORY_PERIOD
    seed: int | None = None
    entropy_source: Any | None = None  # Optional external entropy (e.g. Quantum)

    def __post_init__(self) -> None:
        if self.tau_mem <= 0:
            raise ValueError(f"tau_mem must be > 0, got {self.tau_mem}")
        self._rng = RNG(self.seed)
        self.v = self.v_rest
        self.refractory_counter = 0
        self.reset_state()

    def step(self, input_current: float) -> int:
        if self.refractory_counter > 0:
            self.refractory_counter -= 1
            self.v = self.v_rest
            return 0

        # Membrane leak term
        dv_leak = -(self.v - self.v_rest) * (self.dt / self.tau_mem)

        # Input term (simple Ohm's law; you can absorb R into current)
        dv_input = self.resistance * input_current * self.dt

        # Noise term (Euler-Maruyama: sigma * sqrt(dt) * N(0,1))
        dv_noise = 0.0
        if self.noise_std > 0.0:
            sqrt_dt = self.dt**0.5
            if self.entropy_source is not None:
                dv_noise = float(self.entropy_source.sample_normal(0.0, self.noise_std * sqrt_dt))
            else:
                dv_noise = float(self._rng.normal(0.0, self.noise_std * sqrt_dt))

        # Update membrane potential
        self.v += dv_leak + dv_input + dv_noise

        # Check for spike
        if self.v >= self.v_threshold:
            spike = 1
            self.v = self.v_reset
            self.refractory_counter = self.refractory_period
        else:
            spike = 0
        return spike

    def reset_state(self) -> None:
        self.v = self.v_rest
        self.refractory_counter = 0

    def get_state(self) -> Dict[str, Any]:
        return {"v": float(self.v), "refractory": self.refractory_counter}

    def process_bitstream(
        self, input_bits: np.ndarray[Any, Any], input_scale: float = 1.0
    ) -> np.ndarray[Any, Any]:
        """
        Process a bitstream (array of 0s and 1s) as input current.
        Returns an array of spikes (0s and 1s).

        input_scale: scaling factor to convert bit (0/1) to current amplitude.
        """
        spikes = np.zeros_like(input_bits, dtype=np.uint8)
        for i, bit in enumerate(input_bits):
            # Treat bit as current pulse of amplitude 'input_scale'
            current = bit * input_scale
            spikes[i] = self.step(current)
        return spikes

process_bitstream(input_bits, input_scale=1.0)

Process a bitstream (array of 0s and 1s) as input current. Returns an array of spikes (0s and 1s).

input_scale: scaling factor to convert bit (0/1) to current amplitude.

Source code in src/sc_neurocore/neurons/stochastic_lif.py
def process_bitstream(
    self, input_bits: np.ndarray[Any, Any], input_scale: float = 1.0
) -> np.ndarray[Any, Any]:
    """
    Process a bitstream (array of 0s and 1s) as input current.
    Returns an array of spikes (0s and 1s).

    input_scale: scaling factor to convert bit (0/1) to current amplitude.
    """
    spikes = np.zeros_like(input_bits, dtype=np.uint8)
    for i, bit in enumerate(input_bits):
        # Treat bit as current pulse of amplitude 'input_scale'
        current = bit * input_scale
        spikes[i] = self.step(current)
    return spikes
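The update rule documented above (leak, Ohmic input, Euler-Maruyama noise) can be reproduced in a few lines. This is a standalone sketch, not the library class; the parameter values are illustrative, not the LIF_* defaults:

```python
import numpy as np

def lif_run(current, v_rest=0.0, v_reset=0.0, v_th=1.0,
            tau=20.0, dt=1.0, R=1.0, noise_std=0.0, seed=0):
    """Discrete-time LIF: dv = -(v - v_rest)*dt/tau + R*I*dt + sigma*sqrt(dt)*N(0,1)."""
    rng = np.random.default_rng(seed)
    v, spikes = v_rest, []
    for I in current:
        v += -(v - v_rest) * (dt / tau) + R * I * dt
        if noise_std > 0.0:
            v += noise_std * np.sqrt(dt) * rng.normal()
        if v >= v_th:          # threshold crossing -> spike and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant subthreshold drive charges the membrane over several steps,
# producing a regular spike train (every 4th step with these numbers).
spikes = lif_run([0.3] * 50)
```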

sc_neurocore.neurons.fixed_point_lif.FixedPointLIFNeuron dataclass

Bit-true fixed-point model of the Verilog sc_lif_neuron.

All arithmetic is performed in signed Q(FRACTION) fixed-point with explicit bit-width masking so that overflow/wrap behaviour matches the hardware exactly.

Parameters

data_width : int
    Total bit width of all fixed-point values (default 16).
fraction : int
    Number of fractional bits (default 8, giving Q8.8).
v_rest, v_reset, v_threshold : int
    Membrane parameters in Q(FRACTION) fixed-point.
refractory_period : int
    Number of clock cycles to hold after a spike.

Example

>>> neuron = FixedPointLIFNeuron()
>>> spike, v = neuron.step(leak_k=240, gain_k=16, I_t=100)
>>> spike in (0, 1)
True
>>> neuron.reset()

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
@dataclass
class FixedPointLIFNeuron:
    """
    Bit-true fixed-point model of the Verilog ``sc_lif_neuron``.

    All arithmetic is performed in signed Q(FRACTION) fixed-point with
    explicit bit-width masking so that overflow/wrap behaviour matches
    the hardware exactly.

    Parameters
    ----------
    data_width : int
        Total bit width of all fixed-point values (default 16).
    fraction : int
        Number of fractional bits (default 8, giving Q8.8).
    v_rest, v_reset, v_threshold : int
        Membrane parameters in Q(FRACTION) fixed-point.
    refractory_period : int
        Number of clock cycles to hold after a spike.

    Example
    -------
    >>> neuron = FixedPointLIFNeuron()
    >>> spike, v = neuron.step(leak_k=240, gain_k=16, I_t=100)
    >>> spike in (0, 1)
    True
    >>> neuron.reset()
    """

    data_width: int = FP_DATA_WIDTH
    fraction: int = FP_FRACTION
    v_rest: int = 0
    v_reset: int = 0
    v_threshold: int = FP_V_THRESHOLD
    refractory_period: int = FP_REFRACTORY_PERIOD

    def __post_init__(self) -> None:
        if not 1 <= self.data_width <= 32:
            raise ValueError(f"data_width must be in [1, 32], got {self.data_width}")
        if not 0 <= self.fraction < self.data_width:
            raise ValueError(f"fraction must be in [0, data_width), got {self.fraction}")
        if self.refractory_period < 0:
            raise ValueError(f"refractory_period must be >= 0, got {self.refractory_period}")
        self.v: int = self.v_rest
        self.refractory_counter: int = 0

    def step(self, leak_k: int, gain_k: int, I_t: int, noise_in: int = 0) -> tuple[int, int]:
        """
        Execute one clock cycle — bit-true match to Verilog RTL.

        Parameters
        ----------
        leak_k : int   – ALPHA_LEAK in Q(FRACTION)
        gain_k : int   – GAIN_IN in Q(FRACTION)
        I_t    : int   – Input current in Q(FRACTION)
        noise_in : int – External noise in Q(FRACTION)

        Returns
        -------
        (spike, v_out) : tuple[int, int]
        """
        W = self.data_width

        if self.refractory_counter > 0:
            self.refractory_counter -= 1
            self.v = self.v_rest
            return 0, _mask(self.v, W)

        # --- Leak term: (V_REST - v) * leak_k >>> FRACTION ---
        diff = _mask(self.v_rest - self.v, 2 * W)
        leak_mul = diff * leak_k
        # Arithmetic right shift (Python >> is arithmetic for negative ints)
        dv_leak = leak_mul >> self.fraction

        # --- Input term: I_t * gain_k >>> FRACTION ---
        in_mul = I_t * gain_k
        dv_in = in_mul >> self.fraction

        # --- Next membrane potential ---
        v_next = _mask(self.v + dv_leak + dv_in + noise_in, W)

        # --- Threshold check ---
        if v_next >= self.v_threshold:
            spike = 1
            self.v = self.v_reset
            self.refractory_counter = self.refractory_period
        else:
            spike = 0
            self.v = v_next

        return spike, _mask(self.v, W)

    def reset(self) -> None:
        """Reset neuron state to power-on defaults."""
        self.v = self.v_rest
        self.refractory_counter = 0

    # Aliases for BaseNeuron-compatible interface
    def reset_state(self) -> None:
        """Reset internal state (alias for :meth:`reset`)."""
        self.reset()

    def get_state(self) -> Dict[str, Any]:
        """Return dict with internal state."""
        return {
            "v": self.v,
            "refractory_counter": self.refractory_counter,
        }

step(leak_k, gain_k, I_t, noise_in=0)

Execute one clock cycle — bit-true match to Verilog RTL.

Parameters

leak_k : int   – ALPHA_LEAK in Q(FRACTION)
gain_k : int   – GAIN_IN in Q(FRACTION)
I_t    : int   – Input current in Q(FRACTION)
noise_in : int – External noise in Q(FRACTION)

Returns

(spike, v_out) : tuple[int, int]

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
def step(self, leak_k: int, gain_k: int, I_t: int, noise_in: int = 0) -> tuple[int, int]:
    """
    Execute one clock cycle — bit-true match to Verilog RTL.

    Parameters
    ----------
    leak_k : int   – ALPHA_LEAK in Q(FRACTION)
    gain_k : int   – GAIN_IN in Q(FRACTION)
    I_t    : int   – Input current in Q(FRACTION)
    noise_in : int – External noise in Q(FRACTION)

    Returns
    -------
    (spike, v_out) : tuple[int, int]
    """
    W = self.data_width

    if self.refractory_counter > 0:
        self.refractory_counter -= 1
        self.v = self.v_rest
        return 0, _mask(self.v, W)

    # --- Leak term: (V_REST - v) * leak_k >>> FRACTION ---
    diff = _mask(self.v_rest - self.v, 2 * W)
    leak_mul = diff * leak_k
    # Arithmetic right shift (Python >> is arithmetic for negative ints)
    dv_leak = leak_mul >> self.fraction

    # --- Input term: I_t * gain_k >>> FRACTION ---
    in_mul = I_t * gain_k
    dv_in = in_mul >> self.fraction

    # --- Next membrane potential ---
    v_next = _mask(self.v + dv_leak + dv_in + noise_in, W)

    # --- Threshold check ---
    if v_next >= self.v_threshold:
        spike = 1
        self.v = self.v_reset
        self.refractory_counter = self.refractory_period
    else:
        spike = 0
        self.v = v_next

    return spike, _mask(self.v, W)

reset()

Reset neuron state to power-on defaults.

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
def reset(self) -> None:
    """Reset neuron state to power-on defaults."""
    self.v = self.v_rest
    self.refractory_counter = 0

reset_state()

Reset internal state (alias for :meth:reset).

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
def reset_state(self) -> None:
    """Reset internal state (alias for :meth:`reset`)."""
    self.reset()

get_state()

Return dict with internal state.

Source code in src/sc_neurocore/neurons/fixed_point_lif.py
def get_state(self) -> Dict[str, Any]:
    """Return dict with internal state."""
    return {
        "v": self.v,
        "refractory_counter": self.refractory_counter,
    }
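The Q8.8 datapath above relies on a signed bit-mask helper (`_mask`) whose source is not shown here. The sketch below restates the standard two's-complement interpretation it implies and walks one leak+input update by hand; the `mask` and `to_q88` helpers are local assumptions, not the library's code:

```python
def mask(value: int, width: int) -> int:
    """Interpret the low `width` bits of an int as a signed two's-complement value."""
    value &= (1 << width) - 1
    if value >= 1 << (width - 1):
        value -= 1 << width
    return value

def to_q88(x: float) -> int:
    """Convert a float to Q8.8 fixed point: 1.0 -> 256, 0.5 -> 128."""
    return int(round(x * 256))

# One update mirroring the documented datapath:
#   dv_leak = (v_rest - v) * leak_k >> FRACTION
#   dv_in   = I_t * gain_k >> FRACTION
v, v_rest = to_q88(0.5), 0
leak_k, gain_k, I_t = 240, 16, 100   # values from the class docstring example
dv_leak = (mask(v_rest - v, 32) * leak_k) >> 8   # Python >> is arithmetic
dv_in = (I_t * gain_k) >> 8
v_next = mask(v + dv_leak + dv_in, 16)
```

Because Python's `>>` is an arithmetic shift on negative ints, the leak term rounds toward negative infinity, matching the Verilog `>>>` behaviour the class documents.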

sc_neurocore.neurons.sc_izhikevich.SCIzhikevichNeuron dataclass

Bases: BaseNeuron

Stochastic Izhikevich neuron (software-only).

Standard Izhikevich model (IEEE TNN 14(6), 2003):

v' = 0.04*v^2 + 5*v + 140 - u + I + noise
u' = a*(b*v - u)

When v >= 30 mV: spike, then v <- c, u <- u + d.

Example

>>> neuron = SCIzhikevichNeuron(noise_std=0.0)
>>> spikes = [neuron.step(10.0) for _ in range(100)]
>>> sum(spikes) > 0  # regular spiking with I=10
True

Source code in src/sc_neurocore/neurons/sc_izhikevich.py
@dataclass
class SCIzhikevichNeuron(BaseNeuron):
    """
    Stochastic Izhikevich neuron (software-only).

    Standard Izhikevich model (IEEE TNN 14(6), 2003):
    v' = 0.04*v^2 + 5*v + 140 - u + I + noise
    u' = a*(b*v - u)

    When v >= 30 mV: spike, then v <- c, u <- u + d.

    Example
    -------
    >>> neuron = SCIzhikevichNeuron(noise_std=0.0)
    >>> spikes = [neuron.step(10.0) for _ in range(100)]
    >>> sum(spikes) > 0  # regular spiking with I=10
    True
    """

    a: float = IZH_A
    b: float = IZH_B
    c: float = IZH_C
    d: float = IZH_D
    dt: float = LIF_DT
    noise_std: float = 0.0
    seed: int | None = None

    def __post_init__(self) -> None:
        self._rng = RNG(self.seed)
        self.reset_state()

    def step(self, input_current: float) -> int:
        # Two half-steps for numerical stability on 0.04v² term.
        # Izhikevich (2003) recommends dt ≤ 0.5 ms; we split each dt into two.
        half_dt = self.dt * 0.5
        for _ in range(2):
            dv = (0.04 * self.v**2 + 5 * self.v + 140 - self.u + input_current) * half_dt
            du = (self.a * (self.b * self.v - self.u)) * half_dt
            self.v += dv
            self.u += du

        if self.noise_std > 0.0:
            self.v += float(self._rng.normal(0.0, self.noise_std))

        if self.v >= IZH_SPIKE_THRESHOLD:
            spike = 1
            self.v = self.c
            self.u += self.d
        else:
            spike = 0
        return spike

    def reset_state(self) -> None:
        self.v = self.c  # membrane potential
        self.u = self.b * self.v  # recovery variable

    def get_state(self) -> Dict[str, Any]:
        return {"v": float(self.v), "u": float(self.u)}

sc_neurocore.neurons.homeostatic_lif.HomeostaticLIFNeuron dataclass

Bases: StochasticLIFNeuron

LIF neuron with homeostatic threshold adaptation.

Self-regulates firing rate toward a target setpoint via exponential moving average of spike rate. Based on Turrigiano (2012).

Example

>>> neuron = HomeostaticLIFNeuron(target_rate=0.1, noise_std=0.0)
>>> for _ in range(200):
...     neuron.step(1.5)
>>> neuron.v_threshold != 1.0  # threshold adapted
True

Source code in src/sc_neurocore/neurons/homeostatic_lif.py
@dataclass
class HomeostaticLIFNeuron(StochasticLIFNeuron):
    """
    LIF neuron with homeostatic threshold adaptation.

    Self-regulates firing rate toward a target setpoint via exponential
    moving average of spike rate. Based on Turrigiano (2012).

    Example
    -------
    >>> neuron = HomeostaticLIFNeuron(target_rate=0.1, noise_std=0.0)
    >>> for _ in range(200):
    ...     neuron.step(1.5)
    >>> neuron.v_threshold != 1.0  # threshold adapted
    True
    """

    target_rate: float = HOMEOSTATIC_TARGET_RATE
    adaptation_rate: float = HOMEOSTATIC_ADAPTATION_RATE
    rate_trace: float = 0.0
    trace_decay: float = HOMEOSTATIC_TRACE_DECAY

    def __post_init__(self) -> None:
        super().__post_init__()
        self.initial_threshold: float = self.v_threshold

    def step(self, input_current: float) -> int:
        spike = super().step(input_current)

        self.rate_trace = self.rate_trace * self.trace_decay + spike * (1.0 - self.trace_decay)

        error = self.rate_trace - self.target_rate
        self.v_threshold += self.adaptation_rate * error
        self.v_threshold = max(
            THRESHOLD_FLOOR,
            min(self.v_threshold, self.initial_threshold * THRESHOLD_CEILING_MULT),
        )

        return spike

    def get_state(self) -> Dict[str, Any]:
        s = super().get_state()
        s["threshold"] = float(self.v_threshold)
        s["rate_trace"] = float(self.rate_trace)
        return s
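The adaptation rule above is just an exponential moving average of the spike train compared against the target rate, with the error nudging the threshold. A standalone sketch (the function and its parameter values are illustrative, not the HOMEOSTATIC_* defaults):

```python
def adapt(spike_train, v_th=1.0, target=0.1, eta=0.01, decay=0.99):
    """Return the threshold after homeostatic adaptation over a spike train."""
    trace = 0.0
    for s in spike_train:
        # EMA of recent firing: decays toward 0, jumps toward 1 on a spike
        trace = trace * decay + s * (1.0 - decay)
        # Firing above target raises the threshold; below target lowers it
        v_th += eta * (trace - target)
    return v_th

th_fast = adapt([1] * 200)  # saturated firing drives threshold up
th_slow = adapt([0] * 200)  # silence drives threshold down
```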

sc_neurocore.neurons.dendritic.StochasticDendriticNeuron dataclass

XOR-nonlinearity neuron with shunting inhibition.

Implements d1 + d2 - 2*d1*d2 (XOR truth table for binary inputs). Based on Koch, Biophysics of Computation, 1999, Ch. 12.

Source code in src/sc_neurocore/neurons/dendritic.py
@dataclass
class StochasticDendriticNeuron:
    """
    XOR-nonlinearity neuron with shunting inhibition.

    Implements ``d1 + d2 - 2*d1*d2`` (XOR truth table for binary inputs).
    Based on Koch, *Biophysics of Computation*, 1999, Ch. 12.
    """

    threshold: float = DENDRITIC_THRESHOLD
    _last_current: float = field(default=0.0, init=False, repr=False)

    def step(self, input_a: float, input_b: float) -> int:
        d1 = input_a
        d2 = input_b

        # XOR nonlinearity: d1 + d2 - 2*d1*d2
        current = d1 + d2 - 2.0 * (d1 * d2)

        self._last_current = current
        if current > self.threshold:
            return 1
        return 0

    def reset_state(self) -> None:
        """Reset internal state to defaults."""
        self._last_current = 0.0

    def get_state(self) -> Dict[str, Any]:
        """Return dict with internal state."""
        return {"last_current": self._last_current, "threshold": self.threshold}

reset_state()

Reset internal state to defaults.

Source code in src/sc_neurocore/neurons/dendritic.py
def reset_state(self) -> None:
    """Reset internal state to defaults."""
    self._last_current = 0.0

get_state()

Return dict with internal state.

Source code in src/sc_neurocore/neurons/dendritic.py
def get_state(self) -> Dict[str, Any]:
    """Return dict with internal state."""
    return {"last_current": self._last_current, "threshold": self.threshold}

Extended Model Library (109 models in neurons/models/)

Integrate-and-Fire Variants (21)

Model File Reference
AdEx adex.py Brette & Gerstner 2005
ExpIF expif.py Fourcaud-Trocme 2003
Lapicque lapicque.py Lapicque 1907
QIF quadratic_if.py Latham 2000
GLIF (5 levels) glif.py Teeter 2018, Allen Institute
MAT mat.py Kobayashi 2009
SFA sfa.py Benda & Herz 2003
Stochastic IF stochastic_if.py Brunel & Hakim 1999
Escape-rate escape_rate.py Gerstner 2000
Fractional LIF fractional_lif.py Lundstrom 2008
COBA LIF coba_lif.py Conductance-based
Perfect Integrator perfect_integrator.py Non-leaky IF
NLIF nlif.py Cubic nonlinearity
Adaptive Threshold adaptive_threshold_if.py Dynamic threshold
PLIF plif.py Fang 2021, learnable tau
Non-Resetting LIF non_resetting_lif.py Kobayashi 2009
Gated LIF gated_lif.py Yao 2022, NeurIPS
Sigma-Delta sigma_delta.py Yoon 2017
TC-LIF tc_lif.py AAAI 2024
Benda-Herz benda_herz.py Benda 2003
Integer QIF iqif.py Lo 2021, fixed-point
Complementary LIF clif.py ICML 2024, dual paths
K-LIF klif.py Learnable scaling
Inhibitory LIF ilif.py 2025, temporal inhibition
E-prop ALIF e_prop_alif.py Bellec 2020, eligibility
Energy LIF energy_lif.py Fardet 2020

Biophysical / Conductance-Based (11)

Model File Reference
Hodgkin-Huxley hodgkin_huxley.py HH 1952 (Nobel Prize)
Connor-Stevens connor_stevens.py Connor 1977, A-type K+
Wang-Buzsaki wang_buzsaki.py Wang 1996, FS interneuron
Pinsky-Rinzel pinsky_rinzel.py Pinsky 1994, 2-compartment
Destexhe destexhe_thalamic.py Destexhe 1993, T-current
Huber-Braun huber_braun.py Braun 1998, cold receptor
Gutkin-Ermentrout gutkin_ermentrout.py Gutkin 1998
Traub-Miles traub_miles.py Traub 1991, hippocampal
Golomb FS golomb_fs.py Golomb 2007, Kv3 channels
Mainen-Sejnowski mainen_sejnowski.py Mainen 1996, axonal Na
Pospischil pospischil.py Pospischil 2008, 5 types

Oscillatory / Qualitative (7)

Model File Reference
FitzHugh-Nagumo fitzhugh_nagumo.py FitzHugh 1961
Morris-Lecar morris_lecar.py Morris 1981
Hindmarsh-Rose hindmarsh_rose.py HR 1984, chaotic bursting
Resonate-and-Fire resonate_and_fire.py Izhikevich 2001
Theta theta.py Ermentrout 1986
FitzHugh-Rinzel fitzhugh_rinzel.py FitzHugh 1976, 3D
Terman-Wang terman_wang.py Terman 1995, LEGION

Bursting (6)

Model File Reference
Chay chay.py Chay 1985, pancreatic beta
Butera butera_respiratory.py Butera 1999, respiratory
Sherman-Rinzel-Keizer sherman_rinzel_keizer.py Sherman 1988
Plant R15 plant_r15.py Plant 1981, Aplysia
Bertram Phantom bertram_phantom.py Bertram 2008
Pernarowski pernarowski.py Pernarowski 1994

Multi-Compartment (4)

Model File Reference
Hay L5 Pyramidal hay_l5.py Hay 2011, 3-compartment BAC firing
Booth-Rinzel booth_rinzel.py Booth 1995, bistable motoneuron
Dendrify dendrify.py Beniaguev 2022, active dendrite
TC-LIF tc_lif.py AAAI 2024, soma+dendrite

Synaptic (3)

Alpha, Synaptic (dual-exp), Tsodyks-Markram (STP)

Map-Based / Discrete (6)

Rulkov, Chialvo, Courbage-Nekorkin, Medvedev, Ibarz-Tanaka, Cazelles

Stochastic (4)

Poisson, Inhomogeneous Poisson, Galves-Locherbach, GLM (Pillow 2008)

Population / Neural Mass (7)

Wilson-Cowan, Jansen-Rit (EEG), Wong-Wang (decision), Ermentrout-Kopell (exact mean-field), Amari (neural field), Wendling (extended JR, epilepsy EEG), Larter-Breakspear (TVB whole-brain)

Hardware-Specific (9)

Loihi CUBA, Loihi 2, TrueNorth, BrainScaleS AdEx, SpiNNaker LIF, SpiNNaker2, DPI/DYNAP-SE, Akida, Sigma-Delta

Rate Models (3)

McCulloch-Pitts (1943), Sigmoid Rate, Threshold-Linear (ReLU)

Other (5)

SRM/SRM0 (kernel), McKean (piecewise FHN), Leaky-Compete-Fire (WTA), Prescott (Type I/II/III), Compte (NMDA working memory)


PyTorch Training Cells (10)

Differentiable spiking neurons for surrogate gradient training:

Cell Module Reference
LIFCell training.snn_modules Standard LIF
IFCell training.snn_modules No leak
SynapticCell training.snn_modules Dual-exponential
ALIFCell training.snn_modules Bellec 2020
RecurrentLIFCell training.snn_modules Orthogonal init
ExpIFCell training.snn_modules Exponential
AdExCell training.snn_modules Adaptive exponential
LapicqueCell training.snn_modules RC circuit
AlphaCell training.snn_modules Alpha synapse
SecondOrderLIFCell training.snn_modules Inertial term
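These cells train with surrogate gradients: the forward pass emits a hard 0/1 spike, while the backward pass substitutes a smooth derivative so gradients can flow through the threshold. The sketch below illustrates the idea only — it is not the training.snn_modules API, and the fast-sigmoid surrogate with slope 10 is an assumed choice:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Hard Heaviside spike: 1 where membrane potential crosses threshold."""
    return (v >= v_th).astype(np.float32)

def spike_surrogate_grad(v, v_th=1.0, slope=10.0):
    """Backward-pass substitute: derivative of a fast sigmoid, peaked at v_th."""
    x = slope * (v - v_th)
    return slope / (1.0 + np.abs(x)) ** 2

v = np.array([0.2, 0.9, 1.0, 1.5])
s = spike_forward(v)           # hard spikes used in the forward pass
g = spike_surrogate_grad(v)    # nonzero everywhere, so learning signals pass
```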