
Monitors & Stimulus Sources

Modules: sc_neurocore.network.monitor, sc_neurocore.network.stimulus
Source: src/sc_neurocore/network/monitor.py (173 LOC) + src/sc_neurocore/network/stimulus.py (68 LOC)
Status (v3.14.0): all six classes (3 monitors + 3 stimuli) wired into Network.run; 21 dedicated tests pass; pure-Python, no Rust path needed (simple data containers).

This page covers the recording side (SpikeMonitor, StateMonitor, RateMonitor) and the stimulus side (TimedArray, PoissonInput, StepCurrent) of the network simulation engine. The orchestrator that drives them lives in api/network.md.


1. Where these classes fit in the simulation loop

Each timestep, the network executes the following sequence (network.py:170 onwards):

Text Only
zero population currents
─► apply_stimuli(t, dt)              ← stimulus.get_current(t [, dt])
─► apply_projections(last_spikes)
─► step populations → spike vectors
─► record(pop, spikes, t, dt)        ← monitors observe here
─► update_plasticity

apply_stimuli walks network.stimuli, calls get_current(t) (or get_current(t, dt) for the time-aware variants), and adds the result to the target population's current accumulator. record walks each monitor list and forwards spikes/state to whichever monitors target each population.

Monitors are passive observers — they never alter dynamics. Stimuli are active sources — they inject current. Both are accepted by Network(*objects) via isinstance dispatch (network.py:63).

Each monitor and stimulus carries a target: Population | None attribute. For stimuli, target=None means "broadcast to populations[0]" (network.py:201). For monitors, the population is set in the constructor and is required.
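The dispatch and target-resolution rules above can be sketched with duck-typed stubs (a minimal illustration; StubPopulation, StubStep and this apply_stimuli are invented here for the sketch, not the library's classes):

```python
import numpy as np

class StubPopulation:
    def __init__(self, n):
        self.n = n
        self.current = np.zeros(n)  # per-step current accumulator

class StubStep:
    """Minimal stimulus: constant amplitude, optional target."""
    def __init__(self, amplitude, target=None):
        self.amplitude = amplitude
        self.target = target  # None means broadcast to populations[0]
    def get_current(self, t_step):
        return self.amplitude

def apply_stimuli(populations, stimuli, t_step):
    # Mirrors the resolution rule: target=None falls back to populations[0]
    for stim in stimuli:
        pop = stim.target if stim.target is not None else populations[0]
        pop.current += stim.get_current(t_step)

pops = [StubPopulation(3), StubPopulation(2)]
apply_stimuli(pops, [StubStep(1.5), StubStep(0.5, target=pops[1])], t_step=0)
print(pops[0].current)  # broadcast stimulus landed here
print(pops[1].current)  # targeted stimulus landed here
```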


2. SpikeMonitor

Python
SpikeMonitor(population: Population, label: str | None = None)

Records (neuron_id, timestep) events from a population. Two ingestion paths so the same monitor works for both Python and Rust backends:

| Method | Caller | Input |
| --- | --- | --- |
| record(spikes, t_step) | Python backend (Network._record) | binary spike vector, np.ndarray[int8], shape (n,) |
| record_event(neuron_id, t_step) | Rust backend (Network._run_rust) | one decoded (int, int) event per spike |

record calls np.nonzero on the spike vector, then appends each (int, int) pair. record_event appends directly. Internal storage is two parallel Python lists _neuron_ids: list[int] and _timesteps: list[int] — kept as lists rather than numpy arrays because appends to numpy arrays are O(n).
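The two ingestion paths end in the same parallel lists; a standalone sketch of the equivalence (plain numpy, no library):

```python
import numpy as np

neuron_ids, timesteps = [], []          # parallel lists, as in SpikeMonitor

def record(spikes, t_step):
    """Vector path (Python backend): one binary spike vector per step."""
    for i in np.nonzero(spikes)[0]:
        neuron_ids.append(int(i))
        timesteps.append(t_step)

def record_event(neuron_id, t_step):
    """Per-event path (Rust backend): one decoded event per spike."""
    neuron_ids.append(neuron_id)
    timesteps.append(t_step)

record(np.array([0, 1, 0, 1], dtype=np.int8), t_step=3)
record_event(2, 4)
print(list(zip(neuron_ids, timesteps)))  # [(1, 3), (3, 3), (2, 4)]
```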

2.1 Read-out helpers

| Property/method | Returns | What it does |
| --- | --- | --- |
| spike_times | np.ndarray[int64], length = count | every spike's timestep |
| spike_trains | dict[int, np.ndarray[int64]] | per-neuron sorted timestep arrays |
| count | int | total spikes recorded |
| raster_data() | (times: ndarray, ids: ndarray) | tuple ready for plt.plot(times, ids, '.') |
| firing_rates(n_steps, dt) | np.ndarray[n], Hz | mean rate per neuron over n_steps × dt seconds |
| isi(neuron) | np.ndarray[int64] | inter-spike intervals (timestep units) for one neuron |
| cross_correlation(i, j, max_lag) | (corr, lags) | delegates to sc_neurocore.analysis.spike_stats.cross_correlation |

spike_trains builds the dict each call (no caching) — for very long recordings cache the result.

firing_rates divides each neuron's total spike count by the simulation duration in seconds (n_steps × dt). It returns Hz per neuron with no smoothing, so estimates are noisy for short runs.

isi(neuron) returns differences in timestep units, not seconds. Multiply by dt to convert.
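For example (plain numpy; the np.diff call mirrors what isi does internally):

```python
import numpy as np

dt = 0.001                          # network timestep in seconds
ts = np.array([3, 7, 12, 30])       # one neuron's spike timesteps
isi_steps = np.diff(ts)             # what isi(neuron) returns: [4, 5, 18]
isi_s = isi_steps * dt              # converted to seconds
print(isi_steps, isi_s)
```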

cross_correlation(i, j) builds binary spike vectors of length max(spike_times)+1, hands them to the analysis backend, and returns (correlation, lags) arrays of length 2 × max_lag + 1. Empty trains for either neuron return zeros (no exception).

2.2 Example

Python
from sc_neurocore.network import (
    Network, Population, Projection, SpikeMonitor, PoissonInput
)

pop = Population("LapicqueNeuron", n=200)
proj = Projection(pop, pop, weight=0.05, probability=0.2, seed=7)
stim = PoissonInput(n=200, rate_hz=500.0, weight=2.0, seed=11)
mon = SpikeMonitor(pop)

net = Network(pop, proj, stim, mon, seed=1)
net.run(duration=0.2, dt=0.001, backend="python")

# Inspect
print(mon.count, "spikes")               # 782
times, ids = mon.raster_data()
rates_hz = mon.firing_rates(n_steps=200, dt=0.001)
print(rates_hz.mean(), "Hz mean")

3. StateMonitor

Python
StateMonitor(
    population: Population,
    variables: list[str] | None = None,   # default ["v"]
    record: list[int] | None = None,      # default = all neurons
)

Captures snapshots of named state variables every time the network calls monitor.snapshot(t_step) (which is once per timestep when the monitor is attached to a population that fires at least one neuron in that step).

variables lists state names to record. The monitor reads them by calling population.get_states(), which itself uses (in order):

  1. neurons[0].get_state() if defined
  2. __dataclass_fields__ if the neuron is a dataclass (excluding dt)
  3. fall back to ["v"]

record optionally subsets which neurons to record (saves memory for large populations when only a few neurons matter).
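The saving is plain fancy indexing on each snapshot; a numpy-only sketch with illustrative numbers:

```python
import numpy as np

n = 10_000
record = [0, 50, 100]                        # neurons of interest
v = np.random.default_rng(0).normal(size=n)  # one step's state vector

full = v.copy()                         # what record=None stores per step
subset = v[np.array(record)].copy()     # what record=[...] stores per step

print(full.nbytes, subset.nbytes)       # 80000 vs 24 bytes per step
```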

3.1 Read-out

| Property | Returns | Shape |
| --- | --- | --- |
| traces | dict[str, np.ndarray] keyed by variable name | (n_steps, n_recorded) per variable |
| t | np.ndarray[int64] timestep array | same length as first dim of traces[v] |

If record is set, the second axis has length len(record); otherwise it matches population.n.

3.2 Example

Python
from sc_neurocore.network import StateMonitor

record_ids = [0, 50, 100, 150, 199]
mon = StateMonitor(pop, variables=["v"], record=record_ids)
net = Network(pop, proj, stim, mon, seed=1)
net.run(0.5, dt=0.001)

import matplotlib.pyplot as plt
for i, nid in enumerate(record_ids):
    plt.plot(mon.t * 0.001, mon.traces["v"][:, i], label=f"neuron {nid}")
plt.xlabel("time (s)"); plt.ylabel("V (a.u.)"); plt.legend()

4. RateMonitor

Python
RateMonitor(population: Population, bin_ms: int = 10)

Bins spike counts into fixed-duration windows and converts to per-bin mean firing rate (Hz averaged over the population).

Internally, each call to record(spikes, t_step, dt) accumulates int(spikes.sum()) and increments a step counter. When the counter reaches steps_per_bin = max(1, int(bin_ms / 1000.0 / dt)), the bin is flushed: the count is appended to _spike_counts, the timestep to _bin_edges, and both internal counters are reset.

4.1 Output

| Property | Returns | Note |
| --- | --- | --- |
| rate | np.ndarray[float64], Hz per bin | count / (bin_seconds × population.n) |
| t | np.ndarray[int64] | timestep edges where each bin flushed |

rate is returned as an empty np.array([], dtype=float64) if the simulation is shorter than one bin (no bin ever flushes).

4.2 Bin-edge discretisation

steps_per_bin = max(1, int(bin_ms / 1000.0 / dt)) truncates: at dt = 0.001 and bin_ms = 7, steps_per_bin is 7 — matching the nominal bin. At dt = 0.0005 and bin_ms = 10, steps_per_bin is 20 (also matching). At dt = 0.001 and bin_ms = 0 (degenerate), the clamp gives steps_per_bin = 1 — every step becomes a bin.

The flush check is if self._steps_in_bin >= steps_per_bin, so the last incomplete bin is dropped if the simulation ends mid-window.
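The accumulate/flush/drop behaviour can be reproduced with a bare counter (standalone sketch; steps_per_bin is hard-coded to 10 for clarity):

```python
steps_per_bin = 10
spike_counts, bin_edges = [], []
current_count, steps_in_bin = 0, 0

for t_step in range(25):                 # 25 steps = 2 full bins + 5 leftover
    current_count += 1                   # pretend one spike per step
    steps_in_bin += 1
    if steps_in_bin >= steps_per_bin:    # flush a completed bin
        spike_counts.append(current_count)
        bin_edges.append(t_step)
        current_count, steps_in_bin = 0, 0

print(spike_counts, bin_edges)   # [10, 10] [9, 19]; the last 5 steps are dropped
```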


5. TimedArray — pre-computed time-varying current

Python
TimedArray(values: np.ndarray | list[float], dt: float = 0.001)

Holds a 1-D array of scalar currents. get_current(t_step) returns values[min(t_step, len(values) - 1)]. Past the end of the array the last value is held forever — this matches Brian2's TimedArray clamp semantics.

dt is informational only at present (the network does not resample the stimulus to its own dt); callers must ensure values was built at the network's dt.

Python
import numpy as np
ramp = TimedArray(np.linspace(0, 1.0, 1000), dt=0.001)
ramp.target = pop  # 1-second linear ramp into pop
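The clamp itself is a bounded index; a numpy-only sketch of the semantics described above:

```python
import numpy as np

values = np.linspace(0.0, 1.0, 1000)     # 1-second ramp at dt=0.001

def get_current(t_step):
    # min() clamps the index, so the last value is held forever
    return float(values[min(t_step, len(values) - 1)])

print(get_current(0), get_current(999))   # 0.0 1.0
print(get_current(5000))                  # 1.0: held past the end
```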

6. PoissonInput — random Poisson spike train

Python
PoissonInput(n: int, rate_hz: float, weight: float,
             dt: float = 0.001, seed: int = 42)

Each call to get_current(t_step, dt=None) draws n Bernoulli samples with probability rate_hz × dt, multiplies by weight, and returns a weighted current vector. The optional dt argument lets the network pass its own timestep; if omitted the per-stimulus dt is used.

The internal RNG is a np.random.default_rng(seed) — runs are deterministic when seeds are pinned.

Python
# 80 Hz mean, weight 0.5, 100 inputs, deterministic
stim = PoissonInput(n=100, rate_hz=80.0, weight=0.5, dt=0.001, seed=11)
stim.target = pop
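Both the draw and the determinism claim can be checked without a network (a standalone restatement of the get_current body as described above):

```python
import numpy as np

n, rate_hz, weight, dt = 100, 80.0, 0.5, 0.001
p_spike = rate_hz * dt                    # Bernoulli probability per step

rng_a = np.random.default_rng(11)
rng_b = np.random.default_rng(11)
cur_a = (rng_a.random(n) < p_spike).astype(np.float64) * weight
cur_b = (rng_b.random(n) < p_spike).astype(np.float64) * weight

assert np.array_equal(cur_a, cur_b)       # pinned seed: identical currents
print(set(np.unique(cur_a)) <= {0.0, weight})   # True: each entry is 0 or weight
```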

For high rate_hz × dt the Bernoulli model diverges from a true Poisson process: it can emit at most one spike per step, so multi-spike steps are lost, and once rate_hz × dt exceeds 1 the mean count is strictly under-counted. Switch to a true Poisson draw (rng.poisson(...)) if biological accuracy at high rates matters.
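The divergence is easy to demonstrate once rate_hz × dt exceeds 1: the Bernoulli draw saturates at one spike per step while rng.poisson keeps the correct mean (standalone sketch; the Poisson variant is the suggested replacement, not current library behaviour):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                  # rate_hz * dt, deliberately > 1
n_samples = 100_000

bern = (rng.random(n_samples) < lam).astype(float)   # clamps at 1 spike/step
poi = rng.poisson(lam, n_samples).astype(float)      # true Poisson counts

print(bern.mean())   # 1.0 exactly: every step "spikes" once
print(poi.mean())    # close to 2.0: multi-spike steps preserved
```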


7. StepCurrent — rectangular pulse

Python
StepCurrent(onset: int, offset: int, amplitude: float)

Returns amplitude if onset ≤ t_step < offset, else 0.0. Onset is inclusive, offset exclusive. get_current(t_step, dt=0.001) ignores dt because the gate is purely on t_step.

Python
pulse = StepCurrent(onset=100, offset=200, amplitude=2.0)
pulse.target = pop  # injects 2.0 from step 100 (incl) to 200 (excl)

For sub-step or negative-amplitude shapes, build a TimedArray instead.


8. Performance — recording overhead

Measured on this workstation (Intel i5-11600K, Python 3.12.3) with the network setup from api/network.md §11 (n=500 LapicqueNeuron, recurrent random p=0.2, Poisson 500 Hz w=2.0, 200 steps @ dt=1 ms, 3-run median).

| Configuration | Median wall | Δ vs baseline |
| --- | --- | --- |
| baseline (no monitors) | 203.8 ms | |
| + SpikeMonitor | 239.3 ms | +35.5 ms (+17 %) |
| + StateMonitor(["v"]) | 322.8 ms | +119.0 ms (+58 %) |
| + RateMonitor(bin_ms=10) | 238.8 ms | +35.0 ms (+17 %) |
| + all three | 296.2 ms | +92.4 ms (+45 %) |

Observations:

  • StateMonitor is the most expensive because every step it copies the full state-variable array (default population.n entries) into a Python list. For large populations, set record=[indices] to a small subset.
  • SpikeMonitor cost is sparse-driven — it appends one Python (int, int) per spike via np.nonzero + a Python loop. For high firing rates (>100 Hz × n>1000) it dominates.
  • RateMonitor cost is constant per step — one spikes.sum() and a step counter increment. Bin size doesn't affect per-step cost, only the total number of flushes.
  • All three together is sub-additive because their internal work doesn't pay full Python overhead three times — the network's _record pass walks each monitor list once.

These overheads are pure-Python; the Rust backend records via record_event once per spike at the end of the run, so the per-step cost evaporates.

8.1 No Rust path

monitor.py and stimulus.py are intentionally pure-Python data containers. SpikeMonitor.record_event is the only method called from Rust; it is a one-line append. There is no compute kernel to Rustify (unlike projection.py or topology.py, both of which have planned Rust paths in task #13).

PoissonInput.get_current does call rng.random(self.n) per step. For very large n (>10 000) at high rate_hz × dt, the Bernoulli draw becomes measurable and could be vectorised in a Rust extension — currently not on the roadmap.


9. Pipeline wiring

| Surface | How it's wired | Verifier |
| --- | --- | --- |
| from sc_neurocore.network import SpikeMonitor, ... | network/__init__.py:13-25 | tests/test_network_monitors_stimulus.py |
| Network(..., spike_monitor) registration | Network.add isinstance chain (network.py:65) | TestSpikeMonitor::test_record_* |
| Python backend invokes mon.record(spikes, t) | Network._record (network.py:229) | test_records_voltage |
| Rust backend invokes mon.record_event(nid, t) | Network._run_rust decode loop (network.py:155) | (not test-covered without Rust wheel) |
| Stimulus targeting | stim.target = pop then Network.add(stim) | apply_stimuli (network.py:199) |
| cross_correlation | imports analysis.spike_stats.cross_correlation lazily | test_network_basic.py indirectly |

All six public symbols re-exported from sc_neurocore.network are wired into the simulation loop; none are orphan helpers.


10. Audit (7-point checklist)

| # | Dimension | Status | Detail |
| --- | --- | --- | --- |
| 1 | Pipeline wiring | ✅ PASS | All six classes wired via Network.add dispatch |
| 2 | Multi-angle tests | ✅ PASS | 21 tests across 6 Test* classes covering construction, recording, edge cases (empty, off-window, clamp), label, default variables, bin accumulation, deterministic seeding |
| 3 | Rust path | N/A | Pure-Python data containers; record_event is the only Rust ingestion point and trivial |
| 4 | Benchmarks | ✅ PASS | §8 measured this session; 3-run median per config |
| 5 | Performance docs | ✅ PASS | §8 + §8.1 |
| 6 | Documentation page | ✅ PASS | This page |
| 7 | Rules followed | ✅ PASS | SPDX headers ✅; no # noqa; no # type: ignore; British English in this doc |

Net: 0 WARN, 0 FAIL.


11. Known issues

11.1 TimedArray.dt is informational only

The constructor accepts dt but the network never resamples the stored values array against its own dt. If the user builds a TimedArray with dt=0.0005 and passes it to a network running at dt=0.001, the stimulus advances at half the intended rate (every other network step uses the same values[t_step]). Consider documenting the contract explicitly or auto-resampling.
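One possible auto-resampling fix, sketched with np.interp (hypothetical; the library currently performs no resampling, and this resample helper is invented here):

```python
import numpy as np

def resample(values, stim_dt, net_dt, n_steps):
    """Sample a stimulus array built at stim_dt onto a network grid at net_dt."""
    t_stim = np.arange(len(values)) * stim_dt       # source time axis
    t_net = np.arange(n_steps) * net_dt             # target time axis
    # np.interp holds the end values past the array, matching the clamp
    return np.interp(t_net, t_stim, values)

values = np.linspace(0.0, 1.0, 2000)                # built at dt=0.0005 (1 s)
net_values = resample(values, 0.0005, 0.001, 1000)  # network runs at dt=0.001
print(len(net_values))                              # 1000 steps, full-rate ramp
```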

11.2 PoissonInput is Bernoulli, not Poisson

For low rate_hz × dt (< 0.1) the difference is negligible. At higher rate_hz × dt the Bernoulli draw cannot produce more than one spike per step, so multi-spike steps are lost (and for rate_hz × dt > 1 the mean count is strictly under-counted). If high-rate biological accuracy matters, replace with rng.poisson(rate_hz × dt, n) and accept per-step currents that are integer multiples of weight.

11.3 RateMonitor drops the last incomplete bin

Final bin is silently lost if the simulation ends mid-window. Either pad the run to a multiple of bin_ms, or accept that len(rm.rate) is floor(n_steps / steps_per_bin).
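The padding arithmetic for the workaround (a sketch; padded_steps is a hypothetical helper, not library API):

```python
def padded_steps(n_steps, steps_per_bin):
    """Smallest multiple of steps_per_bin >= n_steps (avoids a dropped bin)."""
    return n_steps + (-n_steps) % steps_per_bin

print(padded_steps(205, 10))   # 210: run 5 extra steps to flush the last bin
print(padded_steps(200, 10))   # 200: already aligned
```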

11.4 No record_event test coverage

SpikeMonitor.record_event (the Rust ingestion path) is exercised only when the Rust engine wheel is installed. With the wheel absent (current environment), no test verifies the decode of the u64 = nid<<32 | t packed events. Tracked alongside MPIRunner tests as part of the broader testing gap (task #17 adds real mpirun MPIRunner coverage; consider extending the same test scope to record_event).
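A standalone sketch of the nid<<32 | t packing (as described above) that such a test could start from:

```python
def pack(neuron_id, t_step):
    """Pack (neuron_id, t_step) into one u64: high 32 bits = id, low 32 = t."""
    return (neuron_id << 32) | t_step

def unpack(event):
    """Invert pack(): recover (neuron_id, t_step) from a packed u64."""
    return event >> 32, event & 0xFFFFFFFF

events = [pack(7, 123), pack(0, 0), pack(2**31, 2**32 - 1)]
print([unpack(e) for e in events])   # round-trips each (id, t) pair
```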


12. Tests

Bash
PYTHONPATH=src python3 -m pytest tests/test_network_monitors_stimulus.py -q
# 21 passed (verified 2026-04-17, suite runs in ~2 s including imports)

Covered (per Test* class):

  • TestSpikeMonitor (5 tests): empty after init, single spike, multiple spikes, spike_trains dict construction, label override
  • TestStateMonitor (3): no-spike empty trains, voltage recording, default variable is "v"
  • TestRateMonitor (2): empty after init, bin accumulation
  • TestTimedArray (4): value-at-step, clamp past end, accepts numpy array, single-value
  • TestStepCurrent (5): zero outside window, amplitude inside, onset inclusive, offset exclusive, negative amplitude
  • TestPoissonInput (2): creation, rate stored

Not covered:

  • record_event (Rust ingestion path) — see §11.4
  • High-throughput stress (e.g. n=10 000, 60 Hz, 10 s) — would surface the StateMonitor list-copy cost
  • Mixing multiple monitors of the same type on the same population — the network supports it (each is in its own list) but it isn't asserted
  • cross_correlation end-to-end output (only invocation path is tested through test_network_basic.py)

13. References

  • Brian 2 monitor API (semantic ancestor): Stimberg M., Brette R., Goodman D. F. M. "Brian 2, an intuitive and efficient neural simulator." eLife 8:e47314 (2019).
  • NEST recording devices (semantic ancestor): Eppler J. M. et al. "PyNEST." Front Neuroinform 2:12 (2008).
  • Cross-correlogram method: Perkel D. H., Gerstein G. L., Moore G. P. "Neuronal Spike Trains and Stochastic Point Processes. II. Simultaneous Spike Trains." Biophysical Journal 7:419-440 (1967).



14. Auto-rendered API

sc_neurocore.network.monitor

Monitors: spike, state, and rate recording during simulation.

SpikeMonitor

Records (neuron_idx, timestep) pairs from a population.

Source code in src/sc_neurocore/network/monitor.py
Python
class SpikeMonitor:
    """Records (neuron_idx, timestep) pairs from a population."""

    def __init__(self, population: Population, label: str | None = None) -> None:
        self.population = population
        self.label = label or f"spikes_{population.label}"
        self._neuron_ids: list[int] = []
        self._timesteps: list[int] = []

    def record(self, spikes: np.ndarray, t_step: int) -> None:
        """Store spike events for this timestep (from binary spike vector)."""
        idx = np.nonzero(spikes)[0]
        for i in idx:
            self._neuron_ids.append(int(i))
            self._timesteps.append(t_step)

    def record_event(self, neuron_id: int, t_step: int) -> None:
        """Store a single spike event directly (from Rust backend)."""
        self._neuron_ids.append(neuron_id)
        self._timesteps.append(t_step)

    @property
    def spike_times(self) -> np.ndarray:
        """All spike timesteps as 1-D array."""
        return np.array(self._timesteps, dtype=np.int64)

    @property
    def spike_trains(self) -> dict[int, np.ndarray]:
        """Per-neuron spike timestep arrays."""
        trains: dict[int, list[int]] = {}
        for nid, ts in zip(self._neuron_ids, self._timesteps):
            trains.setdefault(nid, []).append(ts)
        return {k: np.array(v, dtype=np.int64) for k, v in trains.items()}

    @property
    def count(self) -> int:
        """Total number of spikes recorded."""
        return len(self._neuron_ids)

    def raster_data(self) -> tuple[np.ndarray, np.ndarray]:
        """Return (timesteps, neuron_ids) arrays for raster plots."""
        return (
            np.array(self._timesteps, dtype=np.int64),
            np.array(self._neuron_ids, dtype=np.int64),
        )

    def firing_rates(self, n_steps: int, dt: float = 0.001) -> np.ndarray:
        """Mean firing rate (Hz) per neuron over the simulation."""
        duration = n_steps * dt
        rates = np.zeros(self.population.n, dtype=np.float64)
        if duration <= 0:
            return rates
        for nid in self._neuron_ids:
            rates[nid] += 1.0
        rates /= duration
        return rates

    def isi(self, neuron: int) -> np.ndarray:
        """Inter-spike intervals (timestep units) for a single neuron."""
        trains = self.spike_trains
        ts = trains.get(neuron, np.array([], dtype=np.int64))
        if ts.size < 2:
            return np.array([], dtype=np.int64)
        return np.diff(ts)

    def cross_correlation(self, i: int, j: int, max_lag: int = 50) -> tuple[np.ndarray, np.ndarray]:
        """Cross-correlogram between neurons i and j."""
        from sc_neurocore.analysis.spike_stats import cross_correlation as _cc

        trains = self.spike_trains
        ts_i = trains.get(i, np.array([], dtype=np.int64))
        ts_j = trains.get(j, np.array([], dtype=np.int64))
        if ts_i.size == 0 or ts_j.size == 0:
            lags = np.arange(-max_lag, max_lag + 1)
            return np.zeros(len(lags)), lags
        max_t = max(ts_i.max(), ts_j.max()) + 1
        bin_i = np.zeros(max_t, dtype=np.int8)
        bin_j = np.zeros(max_t, dtype=np.int8)
        bin_i[ts_i] = 1
        bin_j[ts_j] = 1
        return _cc(bin_i, bin_j, max_lag_ms=max_lag, dt=1.0)

spike_times property

All spike timesteps as 1-D array.

spike_trains property

Per-neuron spike timestep arrays.

count property

Total number of spikes recorded.

record(spikes, t_step)

Store spike events for this timestep (from binary spike vector).


record_event(neuron_id, t_step)

Store a single spike event directly (from Rust backend).


raster_data()

Return (timesteps, neuron_ids) arrays for raster plots.


firing_rates(n_steps, dt=0.001)

Mean firing rate (Hz) per neuron over the simulation.


isi(neuron)

Inter-spike intervals (timestep units) for a single neuron.


cross_correlation(i, j, max_lag=50)

Cross-correlogram between neurons i and j.


StateMonitor

Records state variable traces from a population.

Source code in src/sc_neurocore/network/monitor.py
Python
class StateMonitor:
    """Records state variable traces from a population."""

    def __init__(
        self,
        population: Population,
        variables: list[str] | None = None,
        record: list[int] | None = None,
    ) -> None:
        self.population = population
        self.variables = variables or ["v"]
        self.record = record
        self._data: dict[str, list[np.ndarray]] = {v: [] for v in self.variables}
        self._t: list[int] = []

    def snapshot(self, t_step: int) -> None:
        """Capture current state variables."""
        self._t.append(t_step)
        states = self.population.get_states()
        for v in self.variables:
            arr = states.get(v, np.zeros(self.population.n))
            if self.record is not None:
                arr = arr[np.array(self.record)]
            self._data[v].append(arr.copy())

    @property
    def traces(self) -> dict[str, np.ndarray]:
        """Variable traces as {name: (n_steps, n_neurons)} arrays."""
        return {k: np.array(v) if v else np.empty((0, 0)) for k, v in self._data.items()}

    @property
    def t(self) -> np.ndarray:
        """Timestep array."""
        return np.array(self._t, dtype=np.int64)

traces property

Variable traces as {name: (n_steps, n_neurons)} arrays.

t property

Timestep array.

snapshot(t_step)

Capture current state variables.


RateMonitor

Population firing rate in time bins.

Source code in src/sc_neurocore/network/monitor.py
Python
class RateMonitor:
    """Population firing rate in time bins."""

    def __init__(self, population: Population, bin_ms: int = 10) -> None:
        self.population = population
        self.bin_ms = bin_ms
        self._spike_counts: list[int] = []
        self._bin_edges: list[int] = []
        self._current_count = 0
        self._steps_in_bin = 0

    def record(self, spikes: np.ndarray, t_step: int, dt: float = 0.001) -> None:
        """Accumulate spikes; flush when a bin completes."""
        self._current_count += int(spikes.sum())
        self._steps_in_bin += 1
        steps_per_bin = max(1, int(self.bin_ms / 1000.0 / dt))
        if self._steps_in_bin >= steps_per_bin:
            self._spike_counts.append(self._current_count)
            self._bin_edges.append(t_step)
            self._current_count = 0
            self._steps_in_bin = 0

    @property
    def rate(self) -> np.ndarray:
        """Firing rate (Hz) per bin."""
        if not self._spike_counts:
            return np.array([], dtype=np.float64)
        duration_s = self.bin_ms / 1000.0
        counts = np.array(self._spike_counts, dtype=np.float64)
        return counts / (duration_s * self.population.n)

    @property
    def t(self) -> np.ndarray:
        """Bin edge timestep array."""
        return np.array(self._bin_edges, dtype=np.int64)

rate property

Firing rate (Hz) per bin.

t property

Bin edge timestep array.

record(spikes, t_step, dt=0.001)

Accumulate spikes; flush when a bin completes.


sc_neurocore.network.stimulus

Stimulus sources for network simulations.

TimedArray

Time-varying current from a pre-computed array.

Source code in src/sc_neurocore/network/stimulus.py
Python
class TimedArray:
    """Time-varying current from a pre-computed array."""

    def __init__(self, values: np.ndarray | list[float], dt: float = 0.001) -> None:
        self.values = np.asarray(values, dtype=np.float64)
        self.dt = dt
        self.target: Population | None = None

    def get_current(self, t_step: int) -> float:
        """Return the value at timestep t_step (clamps to last value)."""
        idx = min(t_step, len(self.values) - 1)
        return float(self.values[idx])

get_current(t_step)

Return the value at timestep t_step (clamps to last value).


PoissonInput

Random Poisson spike input producing weighted current.

Source code in src/sc_neurocore/network/stimulus.py
Python
class PoissonInput:
    """Random Poisson spike input producing weighted current."""

    def __init__(
        self, n: int, rate_hz: float, weight: float, dt: float = 0.001, seed: int = 42
    ) -> None:
        self.n = n
        self.rate_hz = rate_hz
        self.weight = weight
        self.dt = dt
        self._rng = np.random.default_rng(seed)
        self.target: Population | None = None

    def get_current(self, t_step: int, dt: float | None = None) -> np.ndarray:
        """Generate Poisson spikes and return weighted current vector."""
        step_dt = dt if dt is not None else self.dt
        p_spike = self.rate_hz * step_dt
        spikes = (self._rng.random(self.n) < p_spike).astype(np.float64)
        return spikes * self.weight

get_current(t_step, dt=None)

Generate Poisson spikes and return weighted current vector.


StepCurrent

Rectangular step current between onset and offset timesteps.

Source code in src/sc_neurocore/network/stimulus.py
Python
class StepCurrent:
    """Rectangular step current between onset and offset timesteps."""

    def __init__(self, onset: int, offset: int, amplitude: float) -> None:
        self.onset = onset
        self.offset = offset
        self.amplitude = amplitude
        self.target: Population | None = None

    def get_current(self, t_step: int, dt: float = 0.001) -> float:
        """Return amplitude if within [onset, offset), else 0."""
        if self.onset <= t_step < self.offset:
            return self.amplitude
        return 0.0

get_current(t_step, dt=0.001)

Return amplitude if within [onset, offset), else 0.
