Network Simulation Engine¶
Module: sc_neurocore.network (re-exported from sc_neurocore.network.__init__)
Source: src/sc_neurocore/network/ — 12 files, 4065 LOC
Status (v3.14.0): core orchestrator + Population/Projection/topology/monitors fully wired; Rust network dispatch and MPI per-rank Rust dispatch are implemented, with Python fallback when the engine wheel is absent; MPIRunner has 12 mocked-mpi4py tests; topology/projection compute paths are still pure-Python (no Rust path yet).
This page covers the simulation engine — the declarative Network
container, populations of neurons, sparse Projection connectivity (with
delays + STDP), connectivity generators, spike/state/rate monitors, stimulus
sources, the multi-backend dispatcher (auto/python/rust/mpi), and the
LIF-network Verilog exporter. Two specialised circuit models that ship in
the same directory will be documented on their own pages once written
(planned: api/cortical_column.md, api/gamma_oscillation.md); both are
flagged in §14.1 below for fidelity violations.
1. Overview¶
The simulation engine is declarative: you instantiate populations and
projections, hand them to a Network, and call .run(duration, dt). The
network registers each object by type (Population, Projection,
SpikeMonitor, StateMonitor, RateMonitor, TimedArray, PoissonInput,
StepCurrent) and runs the appropriate per-step pipeline:
```
each timestep t:
    for each population: zero its current accumulator
    apply stimuli → currents
    apply projections (CSR matvec + delay) → currents
    for each population: step neurons, record spikes
    update plasticity (STDP) on projections that have it
    optional: Fisher Information Metric self-observation feedback
```
Network.run selects one of three backends:
- `'python'` — the pure-Python loop above
- `'rust'` — delegates to `sc_neurocore_engine.NetworkRunner` (PyO3) when every population's `model_name` is in `NetworkRunner.supported_models()` and there are no stimuli and no plasticity
- `'mpi'` — partitions populations round-robin across MPI ranks and exchanges spikes via `Allgatherv`
'auto' (default) picks 'rust' when its preconditions hold, otherwise
falls back to 'python'.
2. Public Surface¶
The module re-exports 18 public symbols from
sc_neurocore.network.__init__:
| Symbol | Source file | Role |
|---|---|---|
| `Network` | network.py | Top-level orchestrator + dispatcher |
| `Population` | population.py | Vectorised group of identical neurons |
| `Projection` | projection.py | CSR connectivity + delay + STDP |
| `SpikeMonitor` | monitor.py | Records (neuron_id, timestep) pairs |
| `StateMonitor` | monitor.py | Records state-variable traces |
| `RateMonitor` | monitor.py | Population firing rate per bin |
| `TimedArray` | stimulus.py | Time-varying scalar stimulus |
| `PoissonInput` | stimulus.py | Random Poisson spike trains |
| `StepCurrent` | stimulus.py | Rectangular step current |
| `random_connectivity` | topology.py | Erdős–Rényi |
| `small_world` | topology.py | Watts–Strogatz |
| `scale_free` | topology.py | Barabási–Albert |
| `ring_topology` | topology.py | Ring with k-NN |
| `grid_topology` | topology.py | 2-D Manhattan-radius lattice |
| `all_to_all` | topology.py | Dense connectivity |
| `export_verilog` | export.py | Multi-population LIF → Verilog |
| `MPIRunner` | mpi_runner.py | Distributed simulation runner |
| `HAS_MPI` | mpi_runner.py | bool — was mpi4py importable? |
The submodule cortical_column exports CorticalColumn; the submodule
gamma_oscillation exports PINGCircuit. Both are documented separately.
3. Network — orchestrator¶
3.1 Constructor¶
```python
Network(*objects, seed: int = 42, fim_lambda: float = 0.0)
```
Accepts any number of Population, Projection, monitor or stimulus objects
positionally. Each is registered into the corresponding internal list by
isinstance dispatch; unknown types raise TypeError.
fim_lambda enables a Fisher Information Metric self-observation
feedback (_apply_fim, network.py:249): each timestep the per-source
weight is pulled toward the population mean of its source spike vector by
λ_fim · (activity_i − μ) / N. The mechanism is derived from
scpn-quantum-control NB26-28 (FIM alone synchronises at K=0, λ ≥ 8,
increases Φ by 73 %, and is topology-universal). Set to 0.0 (default) to
disable.
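The per-step update reduces to one vectorised line; a minimal NumPy sketch of the formula above (the helper name `fim_step` is hypothetical, not the library API):

```python
import numpy as np

def fim_step(weights: np.ndarray, activity: np.ndarray, lam: float) -> np.ndarray:
    """Apply w_i += lam * (activity_i - mu) / N, the FIM self-observation
    feedback described above: sources above the mean activity are
    strengthened, sources below it are weakened."""
    n = activity.size
    mu = activity.mean()
    return weights + lam * (activity - mu) / n

w = np.ones(4)
a = np.array([0.0, 0.0, 1.0, 1.0])   # two of four sources spiked
w2 = fim_step(w, a, lam=8.0)
```

Because the adjustment is mean-centred, the total weight is conserved; the feedback only redistributes it toward the more active sources.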
3.2 run¶
```python
def run(
    duration: float,
    dt: float = 0.001,
    progress: bool = False,
    backend: str = "auto",
    spike_gating: bool = False,
) -> None
```
- `duration` is in seconds; `dt` is the timestep in seconds. `n_steps` is `int(round(duration / dt))`.
- `backend` ∈ `{"auto", "python", "rust", "mpi"}`. `"rust"` raises if the engine wheel is not importable; `"auto"` silently falls back to Python.
- `spike_gating` (Python backend only) skips neurons with zero input current whose voltage is within 1 % of resting potential. Useful for sparse networks where most neurons are silent.
The Python loop (_run_python, network.py:165) is the reference
implementation; the Rust loop (_run_rust, network.py:123) round-trips
populations and projections through NetworkRunner.add_population /
add_projection, runs n_steps, then decodes packed spike events
(u64 = neuron_id<<32 | timestep) back into Python SpikeMonitor records.
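The packing is plain bit arithmetic; a sketch of the encode/decode pair (helper names are illustrative, the real decode lives inside `_run_rust`):

```python
def pack_event(neuron_id: int, timestep: int) -> int:
    """Pack a spike event as u64 = neuron_id << 32 | timestep."""
    return (neuron_id << 32) | timestep

def unpack_event(event: int) -> tuple[int, int]:
    """Recover (neuron_id, timestep) from a packed u64 event."""
    return event >> 32, event & 0xFFFFFFFF

ev = pack_event(7, 123)
assert unpack_event(ev) == (7, 123)
```

Both halves are 32-bit, so neuron indices and timestep counts above 2³² − 1 would silently wrap; neither limit is reachable at the network sizes documented here.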
3.3 Rust dispatch criteria (_can_use_rust)¶
`Network._can_use_rust` (line 80) returns `True` only when:

- `len(self.stimuli) == 0` — no `TimedArray`/`PoissonInput`/`StepCurrent` in this network.
- The Rust engine import succeeded (`_get_rust_engine()` is not `False`).
- Every `pop.model_name` (or its `*Neuron`-stripped form) is in `NetworkRunner.supported_models()`.
- No projection has a non-empty `plasticity` field.
When any of these fails and backend="auto", the network falls back to
Python without warning. Pass backend="python" explicitly when you need
deterministic dispatch.
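The four checks amount to one boolean conjunction; a simplified sketch (the flattened argument list is an assumption, the real method reads `self` attributes):

```python
def can_use_rust(stimuli, engine, model_names, supported, plasticities) -> bool:
    """Mirror of the four _can_use_rust conditions described above."""
    if stimuli:                    # any TimedArray/PoissonInput/StepCurrent
        return False
    if engine is False:            # engine wheel failed to import
        return False
    if not all(m in supported or m.removesuffix("Neuron") in supported
               for m in model_names):
        return False               # some model has no Rust implementation
    return not any(plasticities)   # no projection carries STDP

assert can_use_rust([], object(), ["LIFNeuron"], ["LIF"], [None])
assert not can_use_rust([], object(), ["LIFNeuron"], ["LIF"], ["stdp"])
```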
3.4 Backend matrix¶
| Feature | Python | Rust | MPI |
|---|---|---|---|
| Heterogeneous neuron models | ✅ any | ⚠️ only `supported_models()` | ✅ per-rank |
| Stimuli | ✅ all | ❌ disqualifies Rust | ✅ rank 0 |
| STDP / plasticity | ✅ | ❌ disqualifies Rust | ✅ |
| Per-synapse delays | ✅ | ⚠️ uniform only via `add_projection(..., max_delay)` | ✅ |
| Spike gating | ✅ | ❌ | ❌ |
| FIM feedback (`fim_lambda > 0`) | ✅ | ❌ | ❌ |
| Multi-rank | ❌ | ❌ | ✅ |
4. Population — vectorised neurons¶
```python
Population(model: type | str, n: int,
           params: dict | None = None,
           label: str | None = None)
```
model may be a class or a string name resolved through
sc_neurocore.neurons.models (a lazy registry of 130 classes —
see api/neurons.md). The constructor instantiates n independent neuron
objects with identical parameters and exposes:
- `population.neurons` — `list[NeuronProtocol]` of length `n`
- `population.voltages` — read-only `np.ndarray` view (kept in sync via `_sync_voltages`)
- `population.step_all(currents, spike_gating=False)` — returns binary spike vector `np.ndarray[int8]` of length `n`
- `population.reset_all()` — calls `reset()` or `reset_state()` on each neuron
- `population.get_states()` — collects all per-neuron state variables into arrays (uses `get_state()`, `__dataclass_fields__`, or falls back to `["v"]`)
- `population.set_voltages(arr)` — sync voltages from an external source (used by the Rust round-trip)
Populations are hashable by identity (id(pop)); the Network keeps
pop_to_currents keyed by id(pop) so two Population objects with
identical parameters are still independent.
4.1 Spike gating¶
When spike_gating=True, step_all skips neurons that have no input current
and sit within 1 % of their resting potential. This makes per-step compute
roughly proportional to the active neuron count, which matters for
sparse networks where most neurons are silent for most of the simulation.
The skipped neurons do not advance — when the population is queried later,
their voltages are stale until they receive input again. Models that track
sub-threshold leak via internal calls to step() lose that leak during
skipped steps.
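The gating condition can be written as a boolean mask; a sketch of the 1 % rule stated above (the helper name and the `v_rest` argument are assumptions, the real check is per-neuron inside `step_all`):

```python
import numpy as np

def active_mask(currents: np.ndarray, voltages: np.ndarray, v_rest: float) -> np.ndarray:
    """A neuron is stepped when it has input current OR its voltage sits
    more than 1 % of |v_rest| away from the resting potential."""
    near_rest = np.abs(voltages - v_rest) <= 0.01 * np.abs(v_rest)
    return (currents != 0.0) | ~near_rest

I = np.array([0.0, 0.5, 0.0])
v = np.array([-65.0, -65.0, -50.0])
mask = active_mask(I, v, v_rest=-65.0)
# neuron 0: silent and at rest → skipped; 1: has input; 2: far from rest
```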
5. Projection — CSR connectivity + delays + STDP¶
```python
Projection(
    source: Population,
    target: Population,
    weight: float,
    probability: float = 1.0,
    delay: float | np.ndarray = 0.0,
    topology: str | tuple[np.ndarray, np.ndarray, np.ndarray] = "random",
    plasticity: str | None = None,
    seed: int = 42,
    weight_threshold: float = 0.0,
)
```
Stores connectivity in CSR (indptr, indices, data). Source spikes
propagate via _csr_matvec (no delay) or one of two delayed variants:
- `delay = 0.0` — direct CSR matvec, no buffering
- `delay = scalar` — uniform axonal delay; output goes through a circular buffer of shape `(steps, target.n)` and is read out one timestep behind
- `delay = ndarray` of length `n_synapses` — per-synapse delay; source spike history is stored as a ring buffer of shape `(max_delay + 1, source.n)` and each synapse reads from `spike_history[(hist_idx − d_k) % (max_delay + 1)]`
Per-synapse delays implement Hammouamri et al. 2023 (DCLS) and Masquelier's DelRec — learnable synaptic delays in spiking nets.
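A minimal sketch of the per-synapse delayed read, assuming the ring-buffer layout described above (the helper name and argument order are illustrative, not the `projection.py` signature):

```python
import numpy as np

def delayed_matvec(indptr, indices, data, delays, spike_history, hist_idx, n_tgt):
    """Synapse k from source i to target indices[k] reads source i's spike
    delays[k] steps in the past from a ring buffer of depth max_delay + 1."""
    depth = spike_history.shape[0]           # max_delay + 1 rows
    out = np.zeros(n_tgt)
    for i in range(len(indptr) - 1):         # CSR rows = source neurons
        for k in range(indptr[i], indptr[i + 1]):
            past = spike_history[(hist_idx - delays[k]) % depth, i]
            out[indices[k]] += data[k] * past
    return out

# two sources, two targets: source 0 spiked 2 steps ago, source 1 never did
indptr, indices = np.array([0, 1, 2]), np.array([0, 1])
data, delays = np.array([1.0, 0.5]), np.array([2, 1])
hist = np.zeros((3, 2))                      # depth 3 = max_delay(2) + 1
hist[(0 - 2) % 3, 0] = 1.0                   # the spike from 2 steps back
out = delayed_matvec(indptr, indices, data, delays, hist, hist_idx=0, n_tgt=2)
```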
5.1 weight_threshold pruning¶
When `weight_threshold > 0.0`, the matvec skips synapses with
`|data[k]| ≤ weight_threshold`. Useful after sparse pruning to avoid wasted
multiplications. The check sits in the inner Python loop (projection.py:47),
so the loop still visits every synapse; the saving is the skipped
multiply-accumulate, not the iteration itself.
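A sketch of the pruned inner loop, showing both the silent-source early exit and the threshold skip (illustrative only, not the `projection.py` code):

```python
import numpy as np

def csr_matvec_pruned(indptr, indices, data, x, n_tgt, weight_threshold=0.0):
    """CSR spike propagation with the early exit on silent source rows and
    the optional |w| <= threshold skip described above."""
    out = np.zeros(n_tgt)
    for i in range(len(indptr) - 1):
        if x[i] == 0:                        # silent source: skip whole row
            continue
        for k in range(indptr[i], indptr[i + 1]):
            if abs(data[k]) <= weight_threshold:
                continue                     # pruned synapse: skip the multiply
            out[indices[k]] += data[k] * x[i]
    return out

# source 0 has one strong and one sub-threshold synapse; source 1 one strong
indptr, indices = np.array([0, 2, 3]), np.array([0, 1, 1])
data = np.array([0.5, 0.05, 1.0])
out = csr_matvec_pruned(indptr, indices, data, x=np.array([1, 1]),
                        n_tgt=2, weight_threshold=0.1)
```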
5.2 STDP plasticity¶
plasticity="stdp" activates trace-based STDP in update_plasticity
(projection.py:258):
- Pre/post traces decay with `tau` (default 20 timesteps), incremented on spike
- LTD on pre-spike: `data[k] -= a_minus * post_trace[j]`
- LTP on post-spike: `data[k] += a_plus * directional_bias * pre_trace[i]`
- Weights clipped at 0 (no sign change)
directional_bias scales a_plus per projection. The scpn-quantum-control
NB19 measurement of autonomic → cortical asymmetry suggests 1.36 for
bottom-up (sensory → higher-order) projections; default 1.0 keeps learning
symmetric.
Self-projections (source is target) additionally trigger
`_enforce_symmetry` after each STDP update: W_ij and W_ji are averaged.
This is required because gradient/STDP updates break W = Wᵀ after ~30 steps
(SPO Finding #7), and asymmetric coupling degrades synchronisation by
~12 % (quantum-control NB24).
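The update rules above can be sketched in a few lines (a simplified trace-STDP step over CSR weights; the exact decay/bump ordering in `update_plasticity` may differ):

```python
import numpy as np

def stdp_step(data, indptr, indices, pre_tr, post_tr, src_sp, tgt_sp,
              a_plus=0.01, a_minus=0.012, tau=20.0, bias=1.0):
    """One trace-based STDP step: decay traces, bump on spikes, then apply
    LTD on pre-spikes and (bias-scaled) LTP on post-spikes, clipped at 0."""
    pre_tr *= np.exp(-1.0 / tau)             # exponential trace decay
    post_tr *= np.exp(-1.0 / tau)
    pre_tr += src_sp                         # bump traces on this step's spikes
    post_tr += tgt_sp
    for i in range(len(indptr) - 1):
        for k in range(indptr[i], indptr[i + 1]):
            j = indices[k]
            if src_sp[i]:                    # LTD on pre-spike
                data[k] -= a_minus * post_tr[j]
            if tgt_sp[j]:                    # LTP on post-spike
                data[k] += a_plus * bias * pre_tr[i]
    np.clip(data, 0.0, None, out=data)       # weights never change sign
    return data

# one synapse 0 → 0; the target spikes while a decayed pre-trace is present
data = np.array([0.5])
stdp_step(data, np.array([0, 1]), np.array([0]),
          pre_tr=np.array([1.0]), post_tr=np.array([0.0]),
          src_sp=np.array([0]), tgt_sp=np.array([1]))
# post-spike with a decayed pre-trace → LTP: the weight grows
```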
6. Topology generators¶
All generators in topology.py return a (indptr, indices, data) CSR
tuple ready to feed into Projection(..., topology=tuple).
| Generator | Algorithm | Cited basis | Symmetric? |
|---|---|---|---|
| `random_connectivity` | Erdős–Rényi | classic | no |
| `small_world` | Watts–Strogatz | Watts & Strogatz 1998 | yes (edges added in both directions) |
| `scale_free` | Barabási–Albert preferential attachment | Barabási & Albert 1999 | yes |
| `ring_topology` | ring + k nearest neighbours, both directions | — | yes |
| `grid_topology` | 2-D lattice within Manhattan radius | — | no (both directions added only when within radius) |
| `all_to_all` | dense | — | yes when `n_src == n_tgt` |
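All generators share one return shape; a hand-rolled ring (equivalent in spirit to `ring_topology` with k = 1, but not the library implementation) shows the CSR tuple `Projection` consumes:

```python
import numpy as np

def tiny_ring(n: int, weight: float = 1.0):
    """CSR (indptr, indices, data) for a ring: each node connects to its two
    immediate neighbours in both directions, so every row has 2 synapses."""
    rows = [sorted({(i - 1) % n, (i + 1) % n}) for i in range(n)]
    indices = np.array([j for row in rows for j in row])
    indptr = np.arange(0, 2 * n + 1, 2)      # uniform row length 2
    data = np.full(2 * n, weight)
    return indptr, indices, data

indptr, indices, data = tiny_ring(5)
```

Feeding the tuple in is then `Projection(src, tgt, weight=1.0, topology=(indptr, indices, data))` per the contract stated above.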
6.1 Performance (this workstation, 2026-04-17)¶
Measured directly by calling each generator and timing with
time.perf_counter(). All inputs are deterministic seeds.
| Generator | n | args | wall (ms) | synapses |
|---|---|---|---|---|
| `random_connectivity` | 200 | p=0.1 | 32.5 | 3 941 |
| `all_to_all` | 200 | — | 11.7 | 40 000 |
| `small_world` | 200 | k=8, p=0.1 | 1.6 | 1 600 |
| `scale_free` | 200 | m=4 | 10.0 | 1 568 |
| `ring_topology` | 200 | k=4 | 0.5 | 1 600 |
| `grid_topology` | 196 (14²) | r=1 | <0.01 | 1 404 |
| `random_connectivity` | 1 000 | p=0.1 | 66.3 | 99 869 |
| `all_to_all` | 1 000 | — | 472.2 | 1 000 000 |
| `small_world` | 1 000 | k=8, p=0.1 | 26.0 | 8 000 |
| `scale_free` | 1 000 | m=4 | 67.8 | 7 968 |
| `ring_topology` | 1 000 | k=4 | 3.8 | 8 000 |
| `grid_topology` | 961 (31²) | r=1 | <0.01 | 7 320 |
small_world and scale_free use Python lists during construction; for
n ≥ 10 000 they become noticeably slow and are candidates for the planned
Rust path (task #13). grid_topology and ring_topology already vectorise
adequately for typical sizes.
7. Stimulus sources¶
| Class | Returns from `get_current` | Notes |
|---|---|---|
| `TimedArray(values, dt)` | scalar float at `min(t_step, len-1)` | clamps past the end |
| `PoissonInput(n, rate_hz, weight, dt, seed)` | `np.ndarray[n]` | `(rng.random < rate_hz·dt)·weight` |
| `StepCurrent(onset, offset, amplitude)` | scalar float if `onset ≤ t < offset` else 0.0 | rectangular |
Each stimulus carries a target: Population | None slot that the network
populates when adding the stimulus to a population (the convenience pattern
is to set stim.target = pop after construction). When target is None,
the network broadcasts to populations[0].
PoissonInput owns a np.random.default_rng(seed); runs are deterministic
when seeds are pinned. TimedArray and StepCurrent are deterministic by
construction.
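The Poisson draw in the table reduces to one comparison per neuron per step; a sketch showing the seed-pinned determinism (the helper name is illustrative):

```python
import numpy as np

def poisson_current(rng, n, rate_hz, weight, dt):
    """Bernoulli approximation of a Poisson spike train: each neuron spikes
    this step with probability rate_hz * dt and injects `weight` if it does."""
    return (rng.random(n) < rate_hz * dt) * weight

a = poisson_current(np.random.default_rng(11), 1000, rate_hz=500.0, weight=2.0, dt=0.001)
b = poisson_current(np.random.default_rng(11), 1000, rate_hz=500.0, weight=2.0, dt=0.001)
# identical seeds → identical stimulus vectors; each entry is 0.0 or 2.0
```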
8. Monitors¶
8.1 SpikeMonitor¶
Records (neuron_id, timestep) pairs. Two ingestion paths:
- `record(spikes, t)` — accepts the binary spike vector emitted by `Population.step_all` (Python backend)
- `record_event(neuron_id, t)` — accepts a single decoded event (used by the Rust round-trip, which packs `u64 = neuron_id << 32 | timestep` for efficient transport)
Read-out helpers:
| Property/method | Returns | Notes |
|---|---|---|
| `spike_times` | `np.ndarray[int64]` | every spike's timestep |
| `spike_trains` | `dict[int, np.ndarray]` | per-neuron timestep arrays |
| `count` | `int` | total spikes recorded |
| `raster_data()` | `(times, neuron_ids)` | tuple ready for raster plot |
| `firing_rates(n_steps, dt)` | `np.ndarray[n]` | mean Hz per neuron |
| `isi(neuron)` | `np.ndarray[int64]` | inter-spike intervals (timesteps) |
| `cross_correlation(i, j, max_lag)` | `(corr, lags)` | delegates to `analysis.spike_stats.cross_correlation` |
8.2 StateMonitor¶
Captures population.get_states() snapshots at each call to snapshot(t).
Configure with variables: list[str] (default ["v"]) and an optional
record: list[int] to subset recorded neurons.
8.3 RateMonitor¶
Bins spike counts into fixed-duration windows (bin_ms), then converts
to per-neuron mean rate (Hz). The bin-completion check uses
steps_per_bin = max(1, int(bin_ms / 1000.0 / dt)), so a 10 ms bin at
dt=1 ms flushes every 10 steps.
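The bin arithmetic, worked through:

```python
def steps_per_bin(bin_ms: float, dt: float) -> int:
    """Number of timesteps per rate bin, floored at 1 (formula from above)."""
    return max(1, int(bin_ms / 1000.0 / dt))

assert steps_per_bin(10.0, 0.001) == 10   # 10 ms bin at dt = 1 ms
assert steps_per_bin(0.5, 0.001) == 1     # sub-dt bins flush every step
```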
9. Verilog export¶
export_verilog(network, output_dir, target="ice40") -> str emits a
top-level sc_network_top SystemVerilog module that wires one
sc_lif_array instance per population. Each population's parameters are
read from the first neuron in pop.neurons (v_threshold, v_reset,
tau) and converted to fixed-point by multiplying by 256 (Q8.8). The
allowed neuron model whitelist is _LIF_MODELS (17 LIF variants); any
non-LIF model raises SCHardwareError.
The exporter writes two files:
- sc_network_top.v — module wrapper with one pop_<i> instance per
population
- params.vh — one `` `define POP_<i>_SIZE n `` per population
Limitations:
- Only supports populations of LIF-family models
- Does not emit projections or topology — only neuron arrays
- Hard-codes Q8.8 scaling (see cli.md §9.1 for the related dt=0.001
underflow bug)
- Not test-covered in tests/test_network_*.py; add coverage when the LIF
network export is exercised end-to-end
For full equation → Verilog compilation including state-machine generation
and FPGA project files, use sc-neurocore compile (see api/cli.md) or
api/compiler.md.
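The Q8.8 conversion is a single multiply-and-round; a sketch that also reproduces the documented `dt=0.001` underflow (helper names are illustrative):

```python
def to_q8_8(value: float) -> int:
    """Fixed-point Q8.8 encoding used by the exporter: 8 fractional bits,
    i.e. multiply by 256 and round to the nearest integer."""
    return int(round(value * 256))

def from_q8_8(raw: int) -> float:
    """Decode a Q8.8 integer back to a float."""
    return raw / 256.0

assert to_q8_8(1.0) == 256      # v_threshold = 1.0 → 0x100
assert to_q8_8(0.001) == 0      # dt = 0.001 underflows to zero (cli.md §9.1)
```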
10. MPIRunner (distributed)¶
mpi_runner.py provides round-robin partitioning of populations across MPI
ranks with MPI_Allgatherv spike exchange every timestep:
```
each rank r:
    local_pops = {i for i in range(n_pops) if i % size == r}
    for t in range(n_steps):
        propagate local + cross-rank projections into local_currents
        step local populations → local_spikes
        Allgatherv local_spikes → all_spikes
        if r == 0: feed monitors with all_spikes
```
mpi_runner.py:79 (_identify_cross_rank_projections) walks every
projection and tags it as local (source and target on the same rank) or
cross-rank. Per-rank stepping supports both backends: each rank dispatches
to the Rust engine when the wheel is importable, and falls back to
Population.step_all (the Python loop) when it is absent.
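The round-robin partition and the cross-rank test reduce to modular arithmetic; a sketch of the rules above (helper names are illustrative):

```python
def partition(n_pops: int, size: int) -> list[set[int]]:
    """Round-robin assignment of population indices to MPI ranks:
    rank r owns every population i with i % size == r."""
    return [{i for i in range(n_pops) if i % size == r} for r in range(size)]

def is_cross_rank(src_pop: int, tgt_pop: int, size: int) -> bool:
    """A projection is cross-rank when its source and target populations
    land on different ranks under the round-robin scheme."""
    return src_pop % size != tgt_pop % size

parts = partition(5, 2)   # 5 populations over 2 ranks
```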
10.1 Status¶
- 191 LOC, including a custom spike packing protocol
(
pop_index | n | spike_datapacked asint8blob) andAllgather+Allgathervchoreography. tests/test_mpi_runner.py(177 lines, 8 tests) covers MPIRunner via mockedmpi4py— partition correctness, RuntimeError when mpi4py is absent, single-rank end-to-end equivalence with the Python backend, cross-rank vs local projection routing, spike-exchange round-trip. Real multi-rank semantics withmpirun -n 2are NOT exercised; task #17 tracks adding apytest-mpi-style real test.- Does not implement spike gating, FIM feedback, or per-rank Rust dispatch.
11. Performance — Python backend (this workstation)¶
Measured via:

```python
net = Network(pop, proj, mon, stim, seed=1)
net.run(duration=0.2, dt=0.001, backend="python")
```
with 200 timesteps, recurrent random connectivity at p=0.2, Poisson input
(500 Hz, w=2.0, seed=11), LapicqueNeuron populations (default
parameters: tau=20 ms, threshold=1.0). Hardware: Intel i5-11600K, 32 GB,
Python 3.12.3.
| Population n | Synapses | 200-step wall | steps / s | Recorded spikes |
|---|---|---|---|---|
| 50 | 489 | 13.9 ms | 14 428 | 185 |
| 200 | 7 911 | 56.6 ms | 3 532 | 782 |
| 500 | 49 570 | 193.8 ms | 1 032 | 2 385 |
| 1 000 | 200 283 | 854.8 ms | 234 | 7 464 |
Scaling is roughly O(n_synapses) because the Python _csr_matvec inner
loop dominates. step_all walks the population list one neuron at a time
(line 78: neuron.step(float(currents[i]))) which adds ~5 µs of Python
overhead per neuron per step. For dense recurrent networks at n ≥ 1000 the
Python backend rapidly becomes the bottleneck — the Rust backend exists for
exactly this case but requires the engine wheel installed (see §13).
11.1 Delay-mode cost (n=200, p=0.2, 200 steps)¶
| Mode | max_delay | wall (ms) |
|---|---|---|
| none | 0 | 114.0 |
| uniform (5 steps) | 5 | 113.1 |
| per_synapse (rand 1–7) | 7 | 637.0 |
Per-synapse delays are ~5.6× slower than no-delay or uniform-delay because
_csr_delayed_matvec walks every synapse without the early-exit
if x[i] == 0: continue of _csr_matvec. (_csr_delayed_matvec reads
spike history at varying offsets, so it cannot skip rows up front.) Use
uniform delay when the biological detail tolerates it.
12. Pipeline wiring¶
| Surface | How it's wired | Verifier |
|---|---|---|
| `from sc_neurocore.network import Network, Population, ...` | `__init__.py` re-exports 18 symbols | tests/test_network_basic.py |
| `Population(model="LapicqueNeuron", n=...)` | resolves through `sc_neurocore.neurons.models._CLASS_TO_MODULE` | `Population._resolve_model` (population.py:19) |
| `net.run(backend="rust")` | imports `sc_neurocore_engine.NetworkRunner` lazily | `_get_rust_engine` (network.py:25) |
| `net.run(backend="mpi")` | imports `mpi4py.MPI` lazily, raises if absent | `_require_mpi` (mpi_runner.py:36) |
| `Projection(plasticity="stdp")` | activates `update_plasticity` per timestep | tested in test_network_basic.py |
| `export_verilog(net, dir)` | calls `_check_exportable` first | raises `SCHardwareError` for non-LIF models |
Every public symbol terminates either in tested code or in an explicit runtime check. There are no orphan helpers.
13. Audit (7-point checklist)¶
| # | Dimension | Status | Detail |
|---|---|---|---|
| 1 | Pipeline wiring | ✅ PASS | All 18 public symbols wired; backend dispatcher complete |
| 2 | Multi-angle tests | ⚠️ WARN | Network tests cover the orchestrator, monitors, topology, cortical column, gamma circuit, and 12 mocked-mpi4py MPIRunner paths including per-rank Rust dispatch. Real multi-rank coverage is still missing (task #17). export.py is not directly covered. |
| 3 | Rust path | ⚠️ WARN | Network._run_rust and MPIRunner per-rank Rust dispatch exist and are tested logically; engine wheel not installed in this environment so empirical Rust numbers in §11 are not available. topology.py, _csr_matvec/_csr_delayed_matvec, update_plasticity are pure Python — task #13 tracks the Rustification. |
| 4 | Benchmarks | ✅ PASS | §6.1, §11, §11.1 measured this session. benchmarks/sc_network_benchmark.py exists (306 lines) but covers SC pipeline (encode/MAC/decode), not network orchestration — that gap is now filled by §11. |
| 5 | Performance docs | ✅ PASS | §11 + §6.1 + §11.1 |
| 6 | Documentation page | ✅ PASS | This page |
| 7 | Rules followed | ⚠️ WARN | SPDX headers on every source file ✅. gamma_oscillation.py:66-67 has # type: ignore[arg-type] without rationale (mirrors cli.py:298). British English in docstrings is mixed (vectorized, synchronization appear); see §14. |
Net: 3 WARN, 0 FAIL. Tracked follow-ups: tasks #10–#13.
14. Known issues & follow-ups¶
14.1 Two model fidelity violations (CRITICAL)¶
CorticalColumn and PINGCircuit ship in this directory but simplify
their cited publications in ways that break the no-simplifications rule:
- `CorticalColumn` cites Douglas & Martin 2004 + Potjans & Diesmann 2014; implements 5 of 8 populations (no L4i/L5i/L6i), 7 of 64 connections from the Binzegger matrix, no PSP kernel, no Poisson background input. Tracked: task #10.
- `PINGCircuit` cites Whittington et al. 1995 + Börgers & Kopell 2003; implements a mean-field rate approximation (population firing rate × scalar weight) instead of the spiking conductance-based PING with α-function synapses. Tracked: task #11.
Both will be documented in detail on their own pages once written
(api/cortical_column.md, api/gamma_oscillation.md); the audit findings
above are the canonical source until then.
14.2 Rustification gap¶
Topology generators and projection matvec / STDP are pure-Python loops.
For n ≥ 1000 dense or any per-synapse-delay setup, this is the dominant
cost. The Rust engine has NetworkRunner for the network loop but no
counterpart for the topology generators, and the add_projection call
takes a Python-side CSR tuple. Task #13 tracks closing this gap.
14.3 MPIRunner real multi-rank coverage missing¶
tests/test_mpi_runner.py covers 12 paths via mocked mpi4py, including
the NetworkRunner.step_population per-rank Rust dispatch contract. The
custom spike-packing protocol and Allgatherv choreography are not
exercised against real mpi4py + mpirun -n 2; a regression in real-MPI
buffer ordering or datatype matching would not be caught. Task #17
tracks adding a pytest-mpi-style real test.
14.4 American spellings in source docstrings¶
Docstrings in network.py, population.py, projection.py use
vectorized, synchronization, optimize etc. — should be British per
SHARED_CONTEXT.md. Not blocking; future cleanup.
14.5 # type: ignore[arg-type] without rationale¶
gamma_oscillation.py:66-67 (dataclass field defaults). Mirror of
cli.py:298. Should either type-correctly or annotate the reason.
15. Tests¶
```bash
PYTHONPATH=src python3 -m pytest \
  tests/test_network_basic.py \
  tests/test_network_coverage.py \
  tests/test_network_monitors_stimulus.py \
  tests/test_cortical_column.py \
  tests/test_cortical_column_dynamics.py \
  tests/test_gamma_oscillation.py \
  tests/test_topology.py \
  tests/test_topology_generators.py -q
# 87 passed in 2.26s (verified 2026-04-17)
```
What the existing tests cover:
- `test_network_basic.py` — Network construction, add(), run() with each backend dispatch path, all monitor types, stimuli, plasticity flag
- `test_network_coverage.py` — edge cases on `_can_use_rust`, FIM feedback, spike gating, `_apply_plasticity`/`_apply_fim` correctness
- `test_network_monitors_stimulus.py` — Spike/State/Rate monitor determinism, Poisson seed reproducibility, TimedArray clamping
- `test_topology.py`, `test_topology_generators.py` — every generator's output shape, symmetry where claimed, deterministic seeding
- `test_cortical_column*.py` — `CorticalColumn` dynamics smoke tests (does NOT verify Potjans/Binzegger fidelity — see task #10)
- `test_gamma_oscillation.py` — `PINGCircuit` smoke tests (does NOT verify spectral peak in the 30–80 Hz band — see task #11)
What the existing tests do not cover:
- `MPIRunner` real multi-rank semantics — 12 mocked-mpi4py tests exist; real `mpirun -n 2` coverage missing (task #17)
- `export_verilog` — no direct test; covered transitively by FPGA flow smoke tests at most
- Performance regressions — no `pytest-benchmark` cases for the network loop; §11 numbers are point measurements
16. References¶
Network simulation engineering:
- Brette R., Rudolph M. et al. "Simulation of networks of spiking neurons: a review of tools and strategies." J Comp Neurosci 23:349-398 (2007).
- Eppler J. M. et al. "PyNEST: A convenient interface to the NEST simulator." Front Neuroinform 2:12 (2008).
Connectivity models:
- Watts D. J., Strogatz S. H. "Collective dynamics of small-world networks." Nature 393:440-442 (1998).
- Barabási A.-L., Albert R. "Emergence of scaling in random networks." Science 286:509-512 (1999).
- Erdős P., Rényi A. "On random graphs." Publicationes Mathematicae 6:290-297 (1959).
Synaptic delays / plasticity:
- Hammouamri I. et al. "Learning delays in spiking neural networks using dilated convolutions with learnable spacings." NeurIPS (2023).
- Bi G., Poo M. "Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type." J Neurosci 18:10464-10472 (1998).
MPI:
- Message Passing Interface Forum. MPI: A Message-Passing Interface Standard, Version 4.0 (2021).
Internal:
- CLI: api/cli.md
- Neuron registry: api/neurons.md, api/neuron_models.md
- Compiler: api/compiler.md
- Cortical column model + audit: planned api/cortical_column.md
- Gamma oscillation model + audit: planned api/gamma_oscillation.md
17. Auto-rendered API¶
sc_neurocore.network
¶
Declarative network simulation engine for SC-NeuroCore.
HAS_MPI = True
module-attribute
¶
Network
¶
Declarative network: collects objects, runs the simulation loop.
Source code in src/sc_neurocore/network/network.py
add(obj)
¶
Register a simulation object by type.
Source code in src/sc_neurocore/network/network.py
run(duration, dt=0.001, progress=False, backend='auto', spike_gating=False)
¶
Run the simulation for duration seconds at timestep dt.
backend selects execution: 'auto' picks Rust when available
and all models are supported, 'rust' forces the Rust backend
(raises if unavailable), 'python' forces pure-Python,
'mpi' runs MPI-distributed (requires mpi4py).
spike_gating: skip neurons with zero input and near-rest voltage. Makes compute roughly proportional to active neuron count. Python backend only.
Source code in src/sc_neurocore/network/network.py
to_torch(surrogate_fn=None)
¶
Build an explicit differentiable bridge without altering NumPy/Rust execution.
The returned module accepts an input current tensor of shape
(T, batch, input_dim) and runs the graph with the same
previous-spike projection semantics used by the NumPy backend.
Source code in src/sc_neurocore/network/network.py
Population
¶
A group of N identical neurons with vectorized state access.
Source code in src/sc_neurocore/network/population.py
voltages
property
¶
Current membrane voltages (read-only view).
__init__(model, n, params=None, label=None)
¶
Create n neurons of model (class or string name).
Source code in src/sc_neurocore/network/population.py
step_all(currents, spike_gating=False)
¶
Advance all neurons one timestep; return binary spike vector.
If spike_gating is True, neurons with zero input current and voltage near rest (within 1% of threshold) are skipped. This makes compute roughly proportional to active neurons — useful for sparse-firing networks. Skipped neurons do not advance, so models that track sub-threshold leak lose that decay while skipped (see §4.1).
Source code in src/sc_neurocore/network/population.py
reset_all()
¶
Reset every neuron to its initial state.
Source code in src/sc_neurocore/network/population.py
get_states()
¶
Collect all neuron states into arrays keyed by variable name.
Source code in src/sc_neurocore/network/population.py
set_voltages(voltages)
¶
Sync voltages from an external source (e.g. Rust backend) into neurons.
Source code in src/sc_neurocore/network/population.py
Projection
¶
Synaptic projection from source to target population.
Parameters¶
delay : float, array-like, or 0
- 0: no delay (default)
- scalar > 0: uniform axonal delay (all synapses share one delay)
- 1-D array of length n_synapses: per-synapse delay in timesteps; enables heterogeneous axonal/synaptic delays
Source code in src/sc_neurocore/network/projection.py
n_synapses
property
¶
Number of synaptic connections.
delay_mode
property
¶
Delay mode: 'none', 'uniform', or 'per_synapse'.
max_delay
property
¶
Maximum delay in timesteps across all synapses.
__init__(source, target, weight, probability=1.0, delay=0.0, topology='random', plasticity=None, seed=42, weight_threshold=0.0)
¶
Create projection with CSR connectivity and optional delay/plasticity.
weight_threshold: skip synapses with |w| <= threshold during propagation. Set > 0 to exploit weight sparsity after pruning.
Source code in src/sc_neurocore/network/projection.py
propagate(source_spikes)
¶
Compute target currents from source spikes through CSR connectivity.
Handles three delay modes:
- none: direct CSR matvec
- uniform: aggregated current through circular buffer
- per_synapse: each synapse reads from spike history at its own delay
Source code in src/sc_neurocore/network/projection.py
update_plasticity(src_spikes, tgt_spikes, a_plus=0.01, a_minus=0.012, tau=20.0, directional_bias=1.0)
¶
Trace-based STDP weight update.
directional_bias scales a_plus for this projection. Set it to 1.36 for bottom-up (sensory → higher) projections per quantum-control NB19 (measured autonomic → cortical asymmetry = 0.36); the default of 1.0 gives symmetric learning.
Source code in src/sc_neurocore/network/projection.py
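Trace-based STDP keeps an exponentially decaying eligibility trace per side: a pre-spike arriving after recent post-activity depresses the weight, a post-spike arriving after recent pre-activity potentiates it, and directional_bias scales only the potentiation term. A single-synapse sketch under those assumptions (the real update is vectorised over the CSR arrays; this function is illustrative):

```python
import math

def stdp_step(w, x_pre, y_post, pre_spike, post_spike,
              a_plus=0.01, a_minus=0.012, tau=20.0,
              directional_bias=1.0, dt=1.0):
    """One trace-based STDP step for a single synapse (illustrative).
    x_pre / y_post are exponentially decaying spike traces with time
    constant tau (in timestep units when dt=1)."""
    decay = math.exp(-dt / tau)
    x_pre, y_post = x_pre * decay, y_post * decay
    if pre_spike:
        w -= a_minus * y_post                      # pre after post: depress
        x_pre += 1.0
    if post_spike:
        w += directional_bias * a_plus * x_pre     # post after pre: potentiate
        y_post += 1.0
    return w, x_pre, y_post

# Pre fires at one step, post fires at the next -> potentiation.
w, x, y = stdp_step(0.5, 0.0, 0.0, pre_spike=True, post_spike=False)
w, x, y = stdp_step(w, x, y, pre_spike=False, post_spike=True)
print(w > 0.5)  # True
```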
SpikeMonitor
¶
Records (neuron_idx, timestep) pairs from a population.
Source code in src/sc_neurocore/network/monitor.py
spike_times
property
¶
All spike timesteps as 1-D array.
spike_trains
property
¶
Per-neuron spike timestep arrays.
count
property
¶
Total number of spikes recorded.
record(spikes, t_step)
¶
Store spike events for this timestep (from binary spike vector).
Source code in src/sc_neurocore/network/monitor.py
record_event(neuron_id, t_step)
¶
Store a single spike event directly (from Rust backend).
Source code in src/sc_neurocore/network/monitor.py
raster_data()
¶
Return (timesteps, neuron_ids) arrays for raster plots.
Source code in src/sc_neurocore/network/monitor.py
firing_rates(n_steps, dt=0.001)
¶
Mean firing rate (Hz) per neuron over the simulation.
Source code in src/sc_neurocore/network/monitor.py
isi(neuron)
¶
Inter-spike intervals (timestep units) for a single neuron.
Source code in src/sc_neurocore/network/monitor.py
cross_correlation(i, j, max_lag=50)
¶
Cross-correlogram between neurons i and j.
Source code in src/sc_neurocore/network/monitor.py
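The analysis helpers all derive from the recorded (neuron_idx, timestep) pairs. A sketch of the two most common reductions over that event list, with illustrative standalone functions rather than the monitor's methods:

```python
def firing_rates(events, n_neurons, n_steps, dt=0.001):
    """Mean firing rate (Hz) per neuron: spike count / total duration."""
    counts = [0] * n_neurons
    for nid, _t in events:
        counts[nid] += 1
    duration = n_steps * dt
    return [c / duration for c in counts]

def isi(events, neuron):
    """Inter-spike intervals (timestep units) for one neuron."""
    times = sorted(t for nid, t in events if nid == neuron)
    return [b - a for a, b in zip(times, times[1:])]

events = [(0, 10), (1, 12), (0, 30), (0, 55)]
print(firing_rates(events, 2, 1000))  # [3.0, 1.0] over a 1 s run
print(isi(events, 0))                 # [20, 25]
```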
StateMonitor
¶
Records state variable traces from a population.
Source code in src/sc_neurocore/network/monitor.py
traces
property
¶
Variable traces as {name: (n_steps, n_neurons)} arrays.
t
property
¶
Timestep array.
snapshot(t_step)
¶
Capture current state variables.
Source code in src/sc_neurocore/network/monitor.py
RateMonitor
¶
Population firing rate in time bins.
Source code in src/sc_neurocore/network/monitor.py
rate
property
¶
Firing rate (Hz) per bin.
t
property
¶
Bin edge timestep array.
record(spikes, t_step, dt=0.001)
¶
Accumulate spikes; flush when a bin completes.
Source code in src/sc_neurocore/network/monitor.py
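Conceptually, the population rate in a bin is the spike count divided by the number of neurons and the bin duration. A batch-mode sketch of that computation over recorded events (the monitor itself accumulates incrementally and flushes per bin; this function is illustrative):

```python
def binned_rates(events, n_neurons, n_steps, bin_steps, dt=0.001):
    """Population firing rate (Hz) per bin:
    spikes_in_bin / (n_neurons * bin_duration_seconds)."""
    n_bins = n_steps // bin_steps
    counts = [0] * n_bins
    for _nid, t in events:
        b = t // bin_steps
        if b < n_bins:
            counts[b] += 1
    bin_dur = bin_steps * dt
    return [c / (n_neurons * bin_dur) for c in counts]

# 2 neurons, 20 steps, two 10-step (10 ms) bins.
events = [(0, 1), (1, 3), (0, 12)]
print(binned_rates(events, 2, 20, 10))  # [100.0, 50.0]
```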
TimedArray
¶
Time-varying current from a pre-computed array.
Source code in src/sc_neurocore/network/stimulus.py
get_current(t_step)
¶
Return the value at timestep t_step (clamps to last value).
Source code in src/sc_neurocore/network/stimulus.py
PoissonInput
¶
Random Poisson spike input producing weighted current.
Source code in src/sc_neurocore/network/stimulus.py
get_current(t_step, dt=None)
¶
Generate Poisson spikes and return weighted current vector.
Source code in src/sc_neurocore/network/stimulus.py
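For small rate·dt, Poisson spiking per timestep is well approximated by an independent Bernoulli draw with probability rate·dt per input, with each spike contributing `weight` to the current. A sketch under that assumption (function name and rng handling are illustrative):

```python
import random

def poisson_current(n_neurons, rate_hz, weight, dt=0.001, rng=None):
    """Bernoulli approximation of Poisson input: each of n_neurons
    inputs fires this step with probability rate_hz * dt; a spike
    contributes `weight` current, otherwise 0."""
    rng = rng or random.Random()
    p = rate_hz * dt
    return [weight if rng.random() < p else 0.0 for _ in range(n_neurons)]

# 100 Hz at dt=1 ms -> each input fires ~10% of steps.
rng = random.Random(0)
sample = poisson_current(5, 100.0, 0.5, rng=rng)
print(all(v in (0.0, 0.5) for v in sample))  # True
```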
StepCurrent
¶
Rectangular step current between onset and offset timesteps.
Source code in src/sc_neurocore/network/stimulus.py
get_current(t_step, dt=0.001)
¶
Return amplitude if within [onset, offset), else 0.
Source code in src/sc_neurocore/network/stimulus.py
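The two deterministic stimuli reduce to simple lookups: a half-open window test for the step current, and an index clamped to the last sample for the timed array. A sketch of both behaviours (standalone functions, not the classes' methods):

```python
def step_current(t_step, amplitude, onset, offset):
    """Rectangular step: amplitude inside [onset, offset), else 0."""
    return amplitude if onset <= t_step < offset else 0.0

def timed_value(values, t_step):
    """TimedArray-style lookup that clamps to the last value
    once t_step runs past the end of the array."""
    return values[min(t_step, len(values) - 1)]

print(step_current(5, 2.0, 3, 10))      # 2.0  (inside the window)
print(step_current(10, 2.0, 3, 10))     # 0.0  (offset is exclusive)
print(timed_value([0.1, 0.2, 0.3], 7))  # 0.3  (clamped to last sample)
```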
MPIRunner
¶
MPI-distributed network simulation.
Partitions populations across MPI ranks via round-robin assignment.
Each rank steps only its local populations; spikes propagate via
MPI_Allgatherv every timestep.
Each rank steps its local populations through the Rust engine's
step_population API when the extension is importable and every
local model on the rank is supported; otherwise the runner falls back
to Population.step_all for CPU-only environments. spike_gating
and fim_lambda are unsupported by this runner: the
Network._run_mpi dispatcher raises NotImplementedError when
either is requested with backend='mpi'.
Source code in src/sc_neurocore/network/mpi_runner.py
run(n_steps, dt=0.001)
¶
Run the distributed simulation for n_steps timesteps.
Results are recorded via the network's monitors. Global monitors aggregate on rank 0 only.
Source code in src/sc_neurocore/network/mpi_runner.py
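The distribution scheme itself is simple: populations go to ranks round-robin, and after each step every rank contributes its local spikes so that all ranks see the full spike vector. A pure-Python sketch of those two pieces, with a list-based stand-in for the Allgatherv exchange (no mpi4py required; names are illustrative):

```python
def round_robin_partition(n_populations, n_ranks):
    """Assign population indices to ranks round-robin,
    mirroring how the runner partitions work: {rank: [pop_idx, ...]}."""
    parts = {r: [] for r in range(n_ranks)}
    for p in range(n_populations):
        parts[p % n_ranks].append(p)
    return parts

def allgatherv_sim(per_rank_spikes):
    """Conceptual stand-in for MPI_Allgatherv: every rank ends up
    with the concatenation of all ranks' local spike lists."""
    merged = [s for rank in per_rank_spikes for s in rank]
    return [list(merged) for _ in per_rank_spikes]

print(round_robin_partition(5, 2))  # {0: [0, 2, 4], 1: [1, 3]}
```

In the real runner the exchange carries variable-length spike buffers per rank, which is exactly what Allgatherv (as opposed to Allgather) exists for.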
random_connectivity(n_src, n_tgt, p, weight, seed=42)
¶
Erdős–Rényi random connectivity.
Source code in src/sc_neurocore/network/topology.py
small_world(n, k, p_rewire, weight, seed=42)
¶
Watts-Strogatz small-world graph (n-by-n adjacency).
Source code in src/sc_neurocore/network/topology.py
scale_free(n, m, weight, seed=42)
¶
Barabási–Albert preferential attachment (n-by-n adjacency).
Source code in src/sc_neurocore/network/topology.py
ring_topology(n, k, weight)
¶
Ring topology with k nearest neighbours in each direction.
Source code in src/sc_neurocore/network/topology.py
grid_topology(rows_count, cols_count, radius, weight)
¶
2D lattice connectivity within Manhattan radius.
Source code in src/sc_neurocore/network/topology.py
all_to_all(n_src, n_tgt, weight)
¶
Full connectivity (every source to every target).
Source code in src/sc_neurocore/network/topology.py
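All of these generators ultimately emit the same CSR triplet that Projection consumes. A sketch of the simplest one, the Erdős–Rényi generator, where each (source, target) edge exists independently with probability p (illustrative reimplementation, not the module's code):

```python
import random

def er_connectivity(n_src, n_tgt, p, weight, seed=42):
    """Erdős–Rényi-style connectivity as a CSR triplet
    (indptr, indices, weights); rows are source neurons."""
    rng = random.Random(seed)
    indptr, indices, weights = [0], [], []
    for _src in range(n_src):
        for tgt in range(n_tgt):
            if rng.random() < p:    # include this edge with probability p
                indices.append(tgt)
                weights.append(weight)
        indptr.append(len(indices))  # close the row
    return indptr, indices, weights

indptr, indices, weights = er_connectivity(3, 4, 0.5, 0.2)
print(len(indptr))  # 4  (n_src + 1 row pointers)
```

With p=1.0 this degenerates to the same edge set as all_to_all.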
export_verilog(network, output_dir, target='ice40')
¶
Export a LIF-based network to Verilog files.
Source code in src/sc_neurocore/network/export.py