SC-NeuroCore¶
Universal Stochastic Computing Framework for Neuromorphic Hardware
SC-NeuroCore provides a complete stack for building, simulating, and deploying stochastic computing (SC) neural networks — from individual neurons to full SCPN layer hierarchies, with both software simulation and Verilog RTL for FPGA deployment.
Version 3.14.0 | 3,376 passing Python tests (3,552 collected) + 378 Rust tests | 100% Coverage | 122 Neuron Models | 111-Model NetworkRunner | 29 Notebooks | PyPI | Rust Engine | GitHub
Train in PyTorch → Quantise to Q8.8 → Simulate with stochastic bitstreams → Compile to SystemVerilog → Synthesise for FPGA. The Rust SIMD engine accelerates all stages.
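Q8.8 in that pipeline means 16-bit signed fixed point with 8 integer and 8 fractional bits (scale 256, resolution 1/256). A minimal NumPy sketch of the quantise/dequantise round trip; this illustrates the format itself, not the library's own quantiser API:

```python
import numpy as np

def to_q8_8(x):
    """Quantise floats to Q8.8: 16-bit signed integers at scale 2^8.

    Representable range is [-128, 127 + 255/256] in steps of 1/256;
    values outside it saturate at the int16 limits.
    """
    q = np.round(np.asarray(x, dtype=np.float64) * 256.0)
    return np.clip(q, -32768, 32767).astype(np.int16)

def from_q8_8(q):
    """Dequantise Q8.8 integers back to floats."""
    return np.asarray(q).astype(np.float64) / 256.0

w = np.array([0.5, -1.25, 3.14159])
q = to_q8_8(w)
print(q)            # values: 128, -320, 804
print(from_q8_8(q)) # recovers 0.5, -1.25, 3.140625 (pi rounded to 1/256)
```

Note the round trip is lossy only by rounding to the nearest 1/256, which is why hardware-aware training (mentioned under fault tolerance below) matters before synthesis.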
Key Features¶
- 122 neuron models — McCulloch-Pitts (1943) through ArcaneNeuron (2026), 9 hardware chip emulators, 9 AI-optimized
- 111 Rust neuron models — PyO3 bindings, 111-model NetworkRunner with Rayon parallelism
- ArcaneNeuron — flagship self-referential cognition model with 5 coupled subsystems (fast/working/deep/gate/predictor)
- Identity substrate — persistent spiking network with checkpointing, trace encoding/decoding, L16 Director control
- Network simulation — Population-Projection-Network with 3 backends (Python, Rust, MPI)
- MPI distributed — billion-neuron scale via mpi4py
- Model zoo — 10 pre-built configs, 3 pre-trained weight sets (MNIST, SHD, DVS)
- 125-function analysis toolkit — spike train stats, distance, correlation, causality, decoding (23 modules)
- 12 visualization plots — raster, voltage, ISI, PSD, cross-correlogram, and more
- 13 advanced plasticity rules — pair/triplet/voltage STDP, BCM, BPTT, TBPTT, EWC, e-prop, R-STDP, MAML, homeostatic, STP, structural
- 7 biological circuits — gap junctions, tripartite synapse (astrocyte), Rall dendrite, cortical column, lateral inhibition, WTA, gamma oscillation
- Packed bitwise layers — 64-bit vectorised AND/MUX/XNOR/NOT/CORDIV for high throughput
- Rust SIMD engine — 41.3 Gbit/s bitstream packing (AVX-512), AVX2/NEON/SVE/RVV dispatch
- GPU acceleration — PyTorch CUDA + CuPy backend + JAX JIT training
- SNN training — 7 surrogate gradients, 10 differentiable neuron cells (`nn.Module`), SpikingNet + ConvSpikingNet, `to_sc_weights()` bridge to bitstreams
- SCPN layer stack — 16-layer holonomic model (L1 Quantum → L16 Meta) with JAX acceleration
- Equation → Verilog compiler — arbitrary ODE string to synthesizable Q8.8 fixed-point RTL in one function call
- Verilog RTL — 19 synthesisable modules (incl. event-driven AER encoder/router/neuron), 7 formal verification files (67 properties), bit-exact co-simulation
- HDC/VSA — Hyper-dimensional computing for symbolic AI workloads
- NIR bridge — FPGA backend for NIR (18/18 primitives, recurrent edges, multi-port subgraphs)
- SC→quantum compiler — compile SC operations to quantum circuits, statevector + noisy simulation
- Predictive coding — zero-multiplication SC layer (XOR=error, popcount=magnitude)
- Topological observables — winding number, Ollivier-Ricci curvature, sheaf defect
- Phi* (IIT) — integrated information estimation for spiking networks
- Fault tolerance — SC vs fixed-point degradation benchmark, hardware-aware training
- SpikeInterface adapter — import experimental spike data (spike trains, sorting results)
- Adaptive bitstream length — Hoeffding/Chebyshev bounds for precision-speed tradeoff
- AXI-Stream + DMA — production hardware interface (stream, DMA, parameterized registers, CDC)
- ANN-to-SNN conversion — `convert()` turns trained PyTorch ANNs into rate-coded SNNs with QCFS activation
- Learnable delays — `DelayLinear` with trainable per-synapse delays via differentiable interpolation
- One-command deploy — `sc-neurocore deploy model.nir --target artix7` produces a bitstream-ready project
- Mixed-precision SC — per-layer adaptive bitstream length (Hoeffding/sensitivity-based)
- Event-driven FPGA — AER encoder, event neuron, spike router (power proportional to spike rate)
- Neural data compression — WaveformCodec: 24× on raw 1024-channel electrode data (fits a Bluetooth uplink), plus 6 spike raster codecs (ISI+Huffman, 4-mode predictive, delta, streaming, AER) achieving 50–750×; learnable world-model predictor; Rust backend (780× speedup); bit-true Verilog
- conda-forge recipe — ready for conda-forge distribution
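Several of the bitwise features above (packed XNOR layers, the zero-multiplication predictive-coding layer) rest on a standard stochastic-computing identity: with bipolar encoding P(bit=1) = (v + 1)/2, the XNOR of two independent bitstreams encodes the product of their values. A plain-NumPy sketch of that identity, independent of the library's packed-layer API:

```python
import numpy as np

rng = np.random.default_rng(0)
LENGTH = 4096  # bitstream length; estimation error shrinks like 1/sqrt(LENGTH)

def encode_bipolar(v, length, rng):
    """Bipolar SC encoding: value v in [-1, 1] -> P(bit=1) = (v + 1) / 2."""
    return rng.random(length) < (v + 1.0) / 2.0

def decode_bipolar(bits):
    """Invert the encoding: estimate v = 2 * mean(bits) - 1."""
    return 2.0 * bits.mean() - 1.0

a, b = 0.6, -0.5
sa = encode_bipolar(a, LENGTH, rng)
sb = encode_bipolar(b, LENGTH, rng)

# Bitwise XNOR of two independent bipolar streams multiplies their values;
# no multiplier hardware is needed, which is the point of packed SC layers.
product = decode_bipolar(~(sa ^ sb))
print(product)  # close to a * b = -0.3, up to stochastic estimation noise
```

The same arithmetic maps directly onto 64-bit packed words (one `uint64` holds 64 stream bits), which is what the packed bitwise layers and the Rust SIMD engine exploit.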
The default `pip install sc-neurocore` wheel ships the public core, simulation, and domain-bridge package surface under the sc-neurocore product name. Frontier modules such as analysis, viz, audio, dashboard, and swarm remain source-checkout features.
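The Hoeffding-based adaptive bitstream length mentioned in the feature list follows from the bound P(|p_hat - p| >= eps) <= 2 exp(-2 N eps^2): solving for N gives the shortest stream meeting a target absolute error eps with confidence 1 - delta. A minimal sketch of that bound (the library's own API for this is not shown here):

```python
import math

def hoeffding_length(epsilon, delta):
    """Smallest bitstream length N such that the rate estimate is within
    epsilon of the true probability with probability at least 1 - delta.

    Derived from Hoeffding's inequality: 2 * exp(-2 * N * eps^2) <= delta.
    """
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# e.g. 1% absolute error at 95% confidence:
print(hoeffding_length(0.01, 0.05))  # 18445 bits

# Relaxing to 10% error cuts the stream by two orders of magnitude:
print(hoeffding_length(0.10, 0.05))  # 185 bits
```

The quadratic dependence on 1/epsilon is the precision-speed tradeoff the adaptive-length and mixed-precision features negotiate per layer.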
Quick Start¶
```bash
pip install sc-neurocore
```
For the Rust SIMD engine (39–202× speedups; pre-built wheels for Linux/Windows/macOS):

```bash
pip install sc-neurocore-engine
```
When the engine is installed, SC-NeuroCore automatically uses Rust for NetworkRunner, E-I network simulation, batch model dispatch, and SIMD bitstream ops. Everything still works without it; NumPy fallbacks are used throughout.
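Whether the optional engine is present can be checked without importing it. A minimal sketch; the import name `sc_neurocore_engine` is an assumption derived from the wheel name, not confirmed by the docs above:

```python
import importlib.util

# find_spec probes the import system without executing the module.
# NOTE: "sc_neurocore_engine" is an assumed import name for the
# sc-neurocore-engine wheel; adjust if the package exposes another module.
if importlib.util.find_spec("sc_neurocore_engine") is not None:
    print("Rust SIMD engine available")
else:
    print("Rust engine not found; NumPy fallbacks will be used")
```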
```python
from sc_neurocore import VectorizedSCLayer, BitstreamEncoder

layer = VectorizedSCLayer(n_inputs=8, n_neurons=4, length=1024)
output = layer.forward([0.3, 0.5, 0.7, 0.2, 0.8, 0.1, 0.6, 0.4])
print(output)  # array of firing-rate probabilities
```
Architecture¶
| Tier | Modules | Ships in wheel |
|---|---|---|
| Core | neurons, synapses, layers, sources, utils, recorders, accel, compiler, hdl_gen, hardware | Yes |
| Simulation | hdc, solvers, transformers, learning, graphs, ensembles, export, pipeline, training | Yes |
| Domain bridges | quantum (Qiskit/PennyLane), adapters/holonomic (JAX), scpn (Petri nets) | Yes |
| Research | robotics, physics, bio, optics, chaos, sleep, interfaces | Source only |
| Frontier | analysis, viz, audio, dashboard, generative, world_model, swarm | Source only |
See Architecture for the full package map.
Tutorials¶
| Tutorial | Topic |
|---|---|
| SC Fundamentals | Bitstream encoding, arithmetic, noise analysis |
| Building Your First SNN | Neurons, synapses, layers, simulation |
| Surrogate Gradient Training | Train SNNs with backpropagation |
| Hyper-Dimensional Computing | Symbolic AI with high-dimensional vectors |
| FPGA in 20 Minutes | Train → quantise → synthesise → deploy |
| Rust Engine & Performance | SIMD tiers, GPU, benchmarking |
| Brunel Network Translation | Brian2 → SC conversion workflow |
| Spike Codec Library | 6 codecs for BCI, probes, neuromorphic, real-time |
Documentation¶
- Getting Started — Installation and first steps
- API Reference — Python package API
- Rust Engine API — High-performance Rust engine docs
- Hardware Guide — FPGA deployment workflow
- Benchmarks — Performance measurements
- For Research Labs — Setup guide for neuroscience, hardware, and ML labs
- Pricing — Free for research, commercial licenses available
Demo¶
See the Neuron Explorer Notebook for an interactive walkthrough of all 122 neuron models with voltage traces, phase portraits, and F-I curves. The NIR Bridge Notebook demonstrates importing NIR graphs and simulating spiking networks. Or try the Quickstart on Google Colab — no installation required.
Community & Ecosystem¶
SC-NeuroCore integrates with the NIR (Neuromorphic Intermediate Representation) ecosystem, connecting to Norse, snnTorch, Lava-DL, and hardware targets including BrainScaleS-2, Loihi, and SpiNNaker2. SC-NeuroCore adds the missing FPGA deployment backend via bit-true Verilog co-simulation.
Contact: protoscience@anulum.li | GitHub Discussions | www.anulum.li
SC-NeuroCore is developed by ANULUM / Fortis Studio