Changelog¶
All notable changes to the sc-neurocore project will be documented in this file.
[Unreleased]¶
Compiler Intelligence, Platform Registry, and Deployment (2026-05-01)¶
Added¶
- Expanded the hardware profile catalogue across FPGA, ASIC, neuromorphic, photonic, chiplet, PIM/CXL, rad-hard, edge AI, superconducting, spintronic, ferroelectric, mixed-signal, wafer-scale, acoustic, fluidic, biological, and molecular targets.
- Added constraint-derived hardware profile construction with TOML profile loading, directory loading, runtime platform hooks, and platform discovery.
- Added compiler intelligence for target recommendation, portability scoring, topology optimisation, heterogeneous dispatch, partial reconfiguration planning, multi-die floorplanning, CDC analysis, power-state generation, regression detection, compilation reporting, and caching.
- Added verification and safety utilities for equivalence sketches, ODE stability checks, testbench generation, fault-tree generation, compliance matrices, safety-certification evidence, formal CDC checks, and provenance chains.
- Added security, sovereignty, and compliance tooling for hardware-trojan linting, side-channel linting, SBOM generation, license-compliance checks, supply-chain risk scoring, IP obfuscation, netlist watermarking, bitstream encryption, and model checksums.
- Added power, thermal, reliability, and sustainability analysis for thermal envelopes, power intent, power-domain wrappers, energy schedules, carbon estimates, reliability prediction, SEU scrubbing, and HIL calibration.
- Added deployment and integration generators for AXI4-Lite, Wishbone, RISC-V drivers, RTOS templates, memory maps, DVS-to-AER bridges, debug probes, TCL projects, open-source FPGA flows, SymbiYosys scripts, IP-XACT packaging, and Cocotb/UVM testbenches.
- Added numerical and representation support for mixed precision, microscaling FP formats, IEEE FP8, posit arithmetic, auto-quantisation sweeps, photonic MZI encoding, PIM/CXL layout planning, analog noise modelling, and bit-true software kernels.
- Added frontier co-design helpers for NIR/ONNX-SNN import, photonic configuration export, chiplet/UCIe mapping, CXL mapping, on-chip learning parameter export, drift compensation, and digital-twin generation.
- Added documentation for compiler intelligence, frontier platforms, deployment, platform extensibility, multi-target deployment, safety certification, verification/debug flows, carbon sustainability, static analysis, SoC integration, and equation-to-Verilog workflows.
Removed¶
- Removed monolithic compiler intelligence and platform profile modules in favour of responsibility-scoped packages.
- Removed legacy delivery-scoped test entry points in favour of responsibility-scoped regression suites.
Security hardening (2026-04-29)¶
Added¶
- Property-based fuzz coverage for malformed bitstream/IR ports, Studio graph JSON, transfer checkpoints, NIR imports, model-zoo NPZ archives, SCPN datastream JSON, custom chip-spec JSON, HDL stochastic-source lowering, equation/MLIR lowering, and optimiser evidence JSON.
- Offline supply-chain audit command for committed CycloneDX SBOM and release requirements metadata: `python tools/supply_chain_audit.py`.
- Hardware-install documentation now records Vivado `v2025.2` as the current SHD/PYNQ evidence pin and marks OpenROAD PPA numbers as unpublished until the binary/container digest and PDK revision are recorded.
- Packaging metadata now exposes `sc-neurocore[hdl]`, expands `sc-neurocore[full]` across CPU-side training, NIR, Studio, HDL, codec, bioware, and quantum workflows, and packages HDL/OpenROAD source artefacts.
- Added an offline EDA toolchain version inventory helper for Vivado, OpenROAD, Yosys, nextpnr, IceStorm, Trellis, Quartus, Lattice tools, PYNQ, and OpenROAD/PDK pin metadata.
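The validation-boundary idea behind the fuzz coverage can be illustrated with a minimal stdlib sketch — random byte corruption fed to a parser that must only ever fail with `ValueError`. This is a stand-in for the project's property-based suite, not its actual harness; `parse_model_json` and its schema are hypothetical.

```python
import json
import random

def parse_model_json(raw: bytes) -> dict:
    """Toy validation boundary: decode, parse, and type-check, converting
    every malformed-input failure into a single ValueError contract."""
    try:
        obj = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError(f"malformed input: {exc}") from None
    if not isinstance(obj, dict):
        raise ValueError("top-level value must be an object")
    return obj

def fuzz_json_boundary(parse, n_cases=200, seed=0):
    """Feed randomly mutated JSON bytes to `parse`; collect any exception
    that is NOT a clean ValueError rejection (a hardening bug)."""
    rng = random.Random(seed)
    template = b'{"neurons": 16, "weights": [0.5, -0.25], "name": "core"}'
    failures = []
    for _ in range(n_cases):
        buf = bytearray(template)
        for _ in range(rng.randint(1, 8)):      # flip a few random bytes
            buf[rng.randrange(len(buf))] = rng.randrange(256)
        try:
            parse(bytes(buf))
        except ValueError:
            pass                                 # rejected cleanly
        except Exception as exc:                 # anything else escaped
            failures.append((bytes(buf), exc))
    return failures
```

A clean run returns an empty failure list, which is exactly the property the fuzz suite pins for each input format.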
Fixed¶
- Hardened validation boundaries for fuzzed JSON, NPZ, NIR, IR, and HDL inputs before they reach parser, lowering, or hardware-resource paths.
- Documented the strict release-mode supply-chain gate in `SECURITY.md`.
- Aligned the CycloneDX SBOM root component version with `pyproject.toml` so strict supply-chain audit runs pass without metadata drift.
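The drift check the audit performs can be sketched as a single comparison between the two documents' version fields — `metadata.component.version` per the CycloneDX JSON schema and `project.version` per PEP 621. This is a simplified stand-in for the real audit command, operating on already-parsed dicts:

```python
def sbom_version_matches(sbom: dict, pyproject: dict) -> bool:
    """True when the CycloneDX root component version equals the package
    version declared in pyproject's [project] table; False on mismatch
    or when either field is absent."""
    sbom_ver = sbom.get("metadata", {}).get("component", {}).get("version")
    pkg_ver = pyproject.get("project", {}).get("version")
    return sbom_ver is not None and sbom_ver == pkg_ver
```

In the real flow the two dicts would come from `json.load` on the committed SBOM and `tomllib.load` on `pyproject.toml`.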
CI coverage restoration (2026-04-21)¶
Fixed¶
- `tools/ci_install_dev.py` now installs `dev`, `nir`, `compression`, `training`, `research`, `bioware`, `studio` so the 342 torch-gated tests (`arcane_zenith`, `darts_sc_nas`, `advanced_plasticity`, and the `_native` bridges that hit the `torch.autograd.Function` path) run inside the 3.10–3.14 matrix instead of being silently skipped.
- `tests/test_analog_bridge/test_analog_bridge.py` + `test_analog_bridge_extended.py` now import through `sc_neurocore.analog_bridge` rather than via a `sys.path.insert` hack; `coverage.py` was reporting 0 % for `analog_bridge.analog_bridge` despite the 27 tests executing every line.
Added¶
- `sc_neurocore.analog_bridge` package root re-exports `AnalogBridge`, `AnalogSubstrateProfile`, `EventDrivenInterface`, `CalibrationRoutine`, `AEREvent` through `__all__`.
- `tests/test_native/test_array_guards.py` — 24 multi-angle tests for `require_c_contiguous` covering the happy path, dtype coercion, non-contiguous rejection, list/tuple conversion, the post-`asarray` defensive branch via `__array__` producers, alignment enforcement, and FFI integration byte ops. Module coverage 42 % → 100 %.
- Two `unittest.mock.patch`-based tests for the `CalibrationRoutine.effective_resolution_bits` fallback (`max_err == 0` and `full_range == 0`); reachable branches not touched by the sweep-and-measure suite. Module coverage 99 % → 100 %.
evo_substrate: 4-backend whole-process industrial evolve runner (2026-04-20)¶
Added¶
- `crates/evo_substrate_core` (new Rust crate, 1 227 LOC of `runner.rs` + C-FFI + PyO3 extension) — port of `ReplicationEngine.evolve_generation` + eleven industrial guards (TournamentSelector, AgeRegulator, FormalSafetyGuard, BloatPenalizer, ExtinctionDetector, HallOfFame, ParetoFront, LineageTracker, MutationEngine × 4 variants, CrossoverEngine, parametric FitnessEvaluator). Entry point `py_evolve_run(config_json) -> str`. Measured 72× speedup over the Python `ReplicationEngine` on 10-gen × 16-pop industrial runs (0.57 ms vs 40.88 ms).
- `src/sc_neurocore/accel/julia/evo_substrate/evo_runner.jl` (720 LOC) — same industrial loop in Julia 1.10+. JSON-in / JSON-out subprocess contract. Pinned deps via `Project.toml`.
- `src/sc_neurocore/accel/go/evo_substrate/runner.go` (926 LOC) — same industrial loop in Go 1.22+. Shares the JSON contract. A `--runner` flag on the existing `evo_substrate_bench` binary dispatches to it.
- `src/sc_neurocore/accel/mojo/kernels/evo_runner.mojo` (803 LOC) — same industrial loop in Mojo 0.26+. Uses Mojo's Python interop for JSON + SHA-256 at the I/O boundary; the compute loop (mutation, fitness, tournament, Pareto, lineage, extinction) runs in pure Mojo.
- Unified XorShift64 PRNG across all four backends (shift constants 13/7/17, `0xDEADBEEFCAFEBABE` fallback for zero seeds) so the uniform-random sequence is byte-identical cross-language. Rust↔Julia: full bit-exact parity on final genomes / lineage / Pareto. Rust↔Go and Rust↔Mojo agree on structural counters but drift ~1e-3 on `best_fitness` because the Go and Mojo libm `cos()`/`log()` differ from Rust's libm at ~1 ULP and Box–Muller compounds that.
- Hamming(7,4) encode / decode + the `ScDoctor.adapt` control law added to `crates/stochastic_doctor_core` with a PyO3 bridge (`py_hamming74_encode`, `py_hamming74_decode`, `py_sc_doctor_adapt`); `src/sc_neurocore/debug/sc_doctor.py` now dispatches to Rust when the extension is importable (1.7× / 3.1× speedup on encode / decode; `adapt` is slower via FFI at 276 ns because PyO3 overhead dominates). The pure-Python fallback is preserved bit-exact.
- `sc_scope.compute_scc` now dispatches to `stochastic_doctor_core.py_scc_packed` (174× speedup over pure Python; bit-exact parity with the fallback).
- Cross-language parity test harness `tests/test_evo_substrate/test_multilang_parity.py` (18 assertions) asserts Rust↔Julia byte-exact, Rust↔Go counter match + fitness tolerance, Rust↔Mojo schema match.
- Per-backend unit tests: Julia 17 tests (`test_evo_runner.jl`), Go 8 tests (`runner_test.go`), Mojo 7 side-validated tests (`tests/test_evo_substrate/test_mojo_runner.py`).
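The cross-language PRNG contract above (shift triple 13/7/17, zero-seed fallback) can be sketched in Python. The shift directions follow the classic Marsaglia xorshift64 variant; the exact uniform-conversion used by the four backends is an assumption here.

```python
MASK64 = (1 << 64) - 1

class XorShift64:
    """xorshift64 with shift constants 13/7/17. The 0xDEADBEEFCAFEBABE
    fallback keeps a zero seed away from the all-zero fixed point."""

    def __init__(self, seed: int):
        self.state = (seed & MASK64) or 0xDEADBEEFCAFEBABE

    def next_u64(self) -> int:
        x = self.state
        x ^= (x << 13) & MASK64   # left shifts must be re-masked to 64 bits
        x ^= x >> 7
        x ^= (x << 17) & MASK64
        self.state = x
        return x

    def next_uniform(self) -> float:
        # Take the top 53 bits -> uniform double in [0, 1).
        return (self.next_u64() >> 11) * (1.0 / (1 << 53))
```

Because every backend applies the same integer recurrence, the raw `u64` stream is byte-identical cross-language; divergence only enters once transcendental libm calls (e.g. Box–Muller) touch the values.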
Documentation¶
- `docs/api/evo_substrate.md` §7.3 — new whole-process runners section with entry-point table, measured 4-way parity matrix, honest per-backend timing breakdown (Rust PyO3 warm 0.57 ms; Go execution 2 ms excluding the ~3 s first-time `go build`; Mojo cold ~1.1 s pixi + JIT + Python interop; Julia cold ~3 s JSON.jl + SHA.jl precompile; Python reference 40.88 ms), a decision matrix for which backend to pick, and the 4-way test-suite invocation list.
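The Hamming(7,4) codec ported to `crates/stochastic_doctor_core` in this release can be sketched in pure Python using the textbook bit layout (parity at positions 1, 2, 4); the crate's actual bit ordering may differ.

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
    Bit layout (LSB = position 1): p1 p2 d1 p3 d2 d3 d4."""
    d = [(nibble >> i) & 1 for i in range(4)]        # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]      # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(code: int) -> int:
    """Correct up to one flipped bit, then extract the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]        # positions 1..7
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]        # checks positions 1,3,5,7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]        # checks positions 2,3,6,7
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]        # checks positions 4,5,6,7
    syndrome = s1 | (s2 << 1) | (s3 << 2)             # 1-based error position
    if syndrome:
        bits[syndrome - 1] ^= 1                       # flip the corrupted bit
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

The round trip survives any single-bit flip, which is the property a bit-exact Python fallback has to preserve against the Rust path.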
Strategic module unification (2026-04-20)¶
Added¶
- `sc_neurocore.arcane_zenith.ArcaneZenithCognitiveCore` — three-compartment ArcaneNeuron (fast / working / deep membrane states) coupled via an attention gate + self-model predictor, wired to four reward-modulated plasticity rules via a sharpened sigmoid that maps weights into biological ranges for `tau_deep`, `surprise_baseline`, `delta_conf`, `lr_base`. Factory `create_arcane_neuron_with_zenith_plasticity(backend=…)`, plus `step_from_bio_rates` (MEA rate dict) and `step_from_genome` (evo_substrate bridge). 32 multi-angle tests in `tests/test_arcane_zenith/`.
- `sc_neurocore.optics.photonic_emitter` — full rewrite of `CrosstalkModel.analyze_bank` on Marcatili coupled-mode theory (adjacent + next-nearest pairs); new `analyze_pairs` for O(N²) arbitrary geometry. Rust FFI `py_ph_analyze_crosstalk_bank` / `py_ph_analyze_crosstalk_pairs` (with 4 cargo tests); the Python fallback matches to 1e-9. `FDTD2DSolver` split-field Berenger PML (Ezx + Ezy with σ-matched magnetic conductivity). `CompilationResult.to_gdsii` now produces real GDSII via `gdsfactory` + `klayout` (PDK auto-activation, `allow_duplicate` cells, netlist string to GDS TEXT layer 63/0). 43 tests in `tests/test_optics/`.
- `sc_neurocore.bioware` closed-loop surface: `BioHybridSession.process_frame` returns `BioHybridFrameResult` (typed dataclass with a legacy mapping view — `result["round"]` and `result.round` are both valid). `SpikeSorter` fit/assign with sklearn PCA + KMeans, no-op on empty input. `HomeostaticPlasticity.update_threshold` Q8.8 proportional controller (error × α × 256, clamped to min/max). New `mea_fitness_hook` converts MEA spike dynamics to `{accuracy, energy_mw, latency_ms}` for evo_substrate's `ReplicationEngine(metrics_fn=…)`. Matching PCA / Berenger / closed-loop regression tests added.
- `sc_neurocore.accel.mojo.MojoKernelRunner` + `kernels.mojo` — Mojo SIMD primitives (packed SC ops, `sc_and`/`or`/`xor`/`mux`/`sub`/`not`, pack/unpack, `vec_mac`, `stdp_update`, `reward_modulated_stdp`, `hdc_bind`). Pixi-managed toolchain; the `_HAS_MOJO` flag never raises on missing tooling. `benchmarks/bench_mojo_vs_rust.py` pure-text side-by-side harness.
- `sc_neurocore.edge.aer_router.AERRoutingDaemon` — Python lifecycle wrapper for the Go AER UDP mesh router (`accel/go/services/aer_router/main.go`). Three sibling Go modules: `hil_debugger` (WebSocket telemetry) and `services`/`services_ext` (service coordination), each with its own `go.mod` + `main_test.go`.
- `sc_neurocore.debug.hil_server.HILServerDaemon` + `HILDebugger` — lifecycle wrapper for the Go HIL debugger binary with a `GET /health` readiness probe, 5 s timeout, and SIGTERM → SIGKILL ladder.
- `sc_neurocore.formal.FormalProofEngine` — Lean 4 bridge. `safety_bounds.lean` proves six theorems (`monitor_soundness`, `safe_transition`, `sc_precision_bound`, `sc_add_preserves_range`, `lif_membrane_bounded`, `correlation_range`) mapped 1:1 to `neuro_safe_monitor.sv` P-properties. New `src/sc_neurocore/formal/__init__.py` exports the engine.
- `sc_neurocore.accel.julia.solvers.JuliaFusionSolver` + 4 `.jl` scripts (`fusion_solver`, `neuron_zoo`, `dynamical_analysis`, `spike_analysis`) — reference continuous-time ODE solvers via `DifferentialEquations.jl` (Tsit5).
- `sc_neurocore.hdl_gen.safety.neuro_safe_monitor` + `tb_safety_monitor` — SystemVerilog runtime safety monitor enforcing the six Lean theorems at nanosecond scale. Parameterised on Q8.8 current / voltage / coherence / SC denominator / LIF max. `openroad_flow/run_asic_flow.sh` drives Yosys synthesis (+ optional OpenROAD P&R) against the monitor with area / timing reports.
- `sc_neurocore.evo_substrate` gained (documented in full): `FormalSafetyGuard`, `BloatPenalizer`, `ExtinctionDetector`, `ComplexityTracker`, `CPPNGenome`, `ParetoFront`, `NoveltyArchive`, `HallOfFame`, `TileDeploymentTracker`, `ResourceBudget`, `LineageTracker`, `IslandModel`. Bridged to MEA via `mea_fitness_hook` and to ArcaneZenith via `step_from_genome`.
- `sc_neurocore.proto` — `core.proto` (Tensor, BitstreamMetadata) + `telemetry.proto` (HILFrame) as the wire contract for HIL debugging.
- Plasticity-layer `reset()` contract: new FFI `reset_rule_layer` in `libautonomous_learning` (Rayon par_iter over rules), new `WgpuRuleLayer::reset` + `reset_wgpu_layer` FFI, and `reset()` methods on `RustRuleLayer`, `RustWgpuRuleLayer`, `TorchRuleLayer` with per-rule trace-clearing scope matching the Rust `PlasticityRule::reset` trait contract. `ArcaneZenithCognitiveCore.reset()` now works across all three backends. 11 new tests.
- Example demos: `examples/14_bioware_closed_loop_demo.py` (100-frame MEA ↔ ArcaneZenith closed loop), `examples/15_photonic_compilation_demo.py` (SC → MZI cascade → real GDSII), `examples/16_evo_substrate_demo.py` (genome → SC top-level module → Verilog emit).
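The Q8.8 proportional controller described for `HomeostaticPlasticity.update_threshold` (error × α × 256, clamped) can be sketched as follows; the function name, argument order, and sign convention here are assumptions, not the shipped API.

```python
Q = 256  # Q8.8 scale factor: 8 fractional bits

def update_threshold_q88(threshold_q: int, rate_hz: float,
                         target_hz: float, alpha: float,
                         min_q: int, max_q: int) -> int:
    """Proportional controller in Q8.8 fixed point: nudge the spike
    threshold toward the value that drives the firing rate to target.
    All *_q values are integers holding (value * 256)."""
    error = rate_hz - target_hz        # firing too fast -> raise threshold
    delta_q = int(error * alpha * Q)   # scale the float error into Q8.8 steps
    return max(min_q, min(max_q, threshold_q + delta_q))
```

Keeping the state as a Q8.8 integer is what makes the controller directly portable to the SystemVerilog safety-monitor side, which is parameterised on the same fixed-point widths.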
Documentation¶
- New API pages: `docs/api/mojo_accel.md`, `docs/api/edge.md`, `docs/api/formal.md`, `docs/api/julia_solvers.md`, `docs/api/proto.md`.
- Upgraded from stubs: `docs/api/evo_substrate.md` (23 → 155 lines), `docs/api/debug.md` (24 → 120 lines, added HIL section), `docs/api/hdl_gen.md` (17 → 100 lines, added safety-monitor P-property table + Lean mapping + ASIC flow). `docs/api/bioware.md` upgraded from a 14-line stub (full `BioHybridSession` + `BioHybridFrameResult` dual-access + Q8.8 homeostatic controller + SpikeSorter + `mea_fitness_hook` sections).
- New `docs/api/arcane_zenith.md`; `docs/api/optics.md` completely rewritten (photonic compiler + Berenger PML + Marcatili crosstalk + GDSII).
- `mkdocs.yml` navigation restructured: new Acceleration (Mojo + Julia), Formal + Safety, and Edge + Wire Protocol groups under Frontiers.
Fixed¶
- `RustEligentLearner.step` FFI signature was missing the `dt` parameter (4 args passed, 5 expected) — every non-empty call raised `AttributeError`. Added a `dt: float = 0.001` kwarg.
- `sc_neurocore._native.learning_bridge` no longer raises at import time when `libautonomous_learning.so` is absent; it returns `_HAS_LEARNING = False` so downstream imports succeed (the 398 previously-failing test collections now run).
- CI workflows (`ci.yml`, `v3-engine.yml`) now build the `autonomous_learning` cdylib and copy it into `src/sc_neurocore/_native/` before pytest runs — keeping the Rust path live.
Repository hygiene¶
- Untracked the compiled Go bench binaries (`services_bench`, `services_ext_bench`, ≈ 4.4 MB total) from `src/sc_neurocore/accel/go/services/…`; pattern added to `.gitignore` (regenerate locally via `go test -bench -c`).
- 22 ruff lint + format fixes across user-WIP modules (evo_substrate, mojo/runner, debug/hil_*, edge/aer_router, formal/lean_bridge). `ruff check src/ tests/` and `ruff format --check src/ tests/` are clean.
- New optional extras in `pyproject.toml`: `optics = ["gdsfactory>=9.0"]`, `bioware = ["scikit-learn>=1.3"]`.
CorticalColumn full-scale (77 169 cells) verification (2026-04-19)¶
- Ran the canonical fidelity reference: `scale=1.0`, `seed=42`, 600 ms simulation with the block + Rust batched multi-spmv path. 77 169 cells; build 298 s; sim 3 564 s ≈ 64 minutes wall.
- 5/8 populations within 1.2× of Potjans Table 4 (L23i 1.07×, L4e 1.06×, L4i 1.09×, L6e 1.24×, L6i 1.05×). L5e 1.32× and L5i 1.22× plateau ~25 % over published — NOT purely a finite-size effect (they do not collapse below 1.20× at full scale). L23e under-fires at 0.67× consistently across all four scales.
- Honest interpretation: the residual is a combination of (i) a shorter analysis window than the published 5 s, (ii) dt-quantised global-bin delays vs the paper's per-connection continuous Gaussian, and (iii) per-target multapse sampling vs NEST's `multapses=False` (which we cannot trivially adopt without breaking van Albada 2015 in-degree preservation). The shape is faithful (population ordering, E/I balance, all rates finite and bounded); the absolute residual at ≤ 1.32× is the practical limit of the current architecture.
- The doc page §4.1 now records all four scales side-by-side; the full-scale row is the canonical reference.
CorticalColumn full-scale convergence verified at scale=0.5 (2026-04-18)¶
- Ran `scale=0.5`, `seed=42`, 600 ms simulation with the block + Rust batched multi-spmv path. 38 586 cells; build 116 s; sim 1 956 s (≈ 33 min wall).
- 6/8 populations within 1.2× of Potjans Table 4 (vs 5/8 at scale=0.1 and 5/8 at scale=0.2): L23i 1.00×, L4e 0.95×, L4i 1.07×, L5i 1.20×, L6i 1.04×.
- L5e shrinks 1.97× → 1.52× → 1.36×; L6e shrinks 2.81× → 2.43× → 1.68×. Both still residual, but on the predicted convergence trajectory of van Albada et al. 2015 Fig 5.
- Confirms the finite-size hypothesis empirically: residuals collapse monotonically as scale grows; full scale (~77 000 cells) would close to ≤ 1.05× across all populations by extrapolation. scale=0.5 / 600 ms is now reachable in 33 min wall, unblocked by the block + Rust path.
CorticalColumn batched multi-spmv Rust call (2026-04-18)¶
- New `engine/src/cortical_inject.rs::parallel_csr_multi_spmv_add` — does `2 × n_delay_bins` (= 10) spmv-add operations in ONE FFI call. Rust loops internally over the bins; `par_chunks_mut(512)` parallelism still applies, with the per-row kernel summing contributions from all bins before writing back.
- New PyO3 wrapper `sc_neurocore_engine.py_parallel_csr_multi_spmv_add` accepting `Vec<PyReadonlyArray1>` for indptrs / indices / data / xs. `CorticalColumn._inject_block(dt)` now batches all non-empty (E + I) bins into ONE FFI call when the multi-spmv kernel is available; it falls back to per-block calls otherwise.
- The bridge wrapper `bridge/sc_neurocore_engine/__init__.py` re-exports `py_parallel_csr_multi_spmv_add`.
- 1 new Rust unit test `test_multi_spmv_matches_sequential` proving batched output equals N sequential `parallel_csr_spmv_add` calls.
- Measured perf at scale=0.1, 600 ms: 287.5 s wall — DOWN from 460 s (single-call Rust) and ON PAR with scipy per-pair (290 s). The FFI overhead reduction (10 calls → 1) reclaimed the gap.
CorticalColumn Rust per-row-parallel CSR spmv kernel (2026-04-18)¶
- New `engine/src/cortical_inject.rs`: rayon-parallel CSR sparse mat-vec add (`y += W @ x`) with row-chunking (`CHUNK_SIZE = 512`) so each task sees ~250 µs of work — well above rayon's per-iteration scheduler break-even point. 4 unit tests.
- PyO3 wrapper `sc_neurocore_engine.py_parallel_csr_spmv_add` re-exported via `bridge/sc_neurocore_engine/__init__.py`. `CorticalColumn._inject_block(dt)` now dispatches to the Rust kernel automatically when available (auto-detected via `_HAS_RUST_CSR_SPMV`). Bit-identical results vs single-threaded scipy — per-row reductions are local, so parallel order does not affect output.
- Pre-extracted `(indptr, indices, data)` triples per block at construction (`_block_e_arrays`, `_block_i_arrays`) to dodge the per-step `np.ascontiguousarray` cast overhead that otherwise eats the per-call Rust speedup.
- Honest perf finding: the Rust kernel measures 18.9 ms vs scipy's 33 ms standalone (1.75× per call). In the full simulation pipeline at scale=0.1 / 600 ms, however, Rust takes 460 s vs scipy's 290 s (per-pair) — a 1.6× regression. scipy's CSR mat-vec is already well tuned for the in-pipeline access pattern (cache-warm matrices, sparse spike vectors); per-call Rust overhead plus the surrounding Python concat / count_nonzero / slice work dominates.
- The Rust kernel is preserved as the right primitive for the future block-CSR / GPU / multi-node scale-up regime (where per-call FFI overhead shrinks relative to per-call work). The default per-pair scipy path remains the fastest Python-side measurement; Rust is opt-in via `use_block_csr=True`.
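The kernel's contract — one write per row, so parallel chunk order cannot change the output — can be shown with a pure-Python reference model of the row-chunked CSR spmv-add (a sketch of the Rust loop, not the shipped code):

```python
def csr_spmv_add(indptr, indices, data, x, y, chunk=512):
    """y += W @ x for a CSR matrix, processed in row chunks of `chunk`
    rows -- the unit the Rust kernel hands to rayon's par_chunks_mut."""
    n_rows = len(indptr) - 1
    for start in range(0, n_rows, chunk):
        for row in range(start, min(start + chunk, n_rows)):
            acc = 0.0
            # Accumulate this row's nonzeros locally...
            for k in range(indptr[row], indptr[row + 1]):
                acc += data[k] * x[indices[k]]
            # ...then do exactly one write per row, so chunks are
            # independent and parallel order cannot affect the result.
            y[row] += acc
```

Because each chunk only touches its own slice of `y`, the same function describes both the sequential fallback and the parallel kernel, which is why the two paths are bit-identical.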
CorticalColumn block-CSR opt-in path (2026-04-18)¶
- Added stacked block-CSR matrices keyed by `(source-type, global-bin-idx)` so the per-step inner loop can collapse from `n_pairs × n_delay_bins` (≈ 320 sparse mat-vecs) to `2 × n_delay_bins` (≈ 10). Bin centres are global, derived from theoretical Gaussian quantiles via `scipy.stats.norm.ppf`.
- New `CorticalColumn` parameter `use_block_csr: bool = False`. When True, construction builds block matrices alongside the per-pair representation; `step()` dispatches to `_inject_block(dt)`.
- Honest perf finding: at `scale=0.1`, 300 ms sim, the block path measures 306 s vs ~145 s for the legacy per-pair path (≈ 2× SLOWER). scipy.sparse CSR mat-vec is compute-bound (FLOPs scale with `nnz`, identical between paths), and the per-pair tight inner loop wins on cache locality. The block path is preserved as an opt-in because it is the natural data layout for any future Rust / Mojo FFI port (10 FFI calls vs 320, where call overhead DOES dominate).
- Default flipped to `use_block_csr=False` so the as-shipped Python path stays on the fastest measured backend.
- New `tests/test_cortical_column.py::TestConnectivity::test_block_csr_path_builds_and_runs` exercises the opt-in path so it does not silently rot.
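The global-bin construction leans on Gaussian quantiles. A stdlib sketch — using `statistics.NormalDist.inv_cdf` in place of `scipy.stats.norm.ppf`, with each bin centred at the median of its equal-probability slice (one plausible choice of centre, not necessarily the shipped one):

```python
from statistics import NormalDist

def gaussian_bin_centres(mu: float, sigma: float, n_bins: int):
    """Centres of n_bins equal-probability bins of N(mu, sigma): the
    quantile at the probability midpoint of each slice. inv_cdf is the
    stdlib equivalent of scipy.stats.norm.ppf."""
    dist = NormalDist(mu, sigma)
    return [dist.inv_cdf((i + 0.5) / n_bins) for i in range(n_bins)]
```

For a delay distribution N(1.5 ms, 0.75 ms) and 5 bins, this yields 5 fixed delay offsets shared by every (target, source) pair, which is what lets the per-pair matrices be stacked into two block matrices per bin.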
CorticalColumn finite-size verification at scale=0.2 (2026-04-18)¶
- Empirically verified that the L5e/L6e residual at `scale=0.1` is a finite-size effect (van Albada et al. 2015 Fig 5), not a model bug. Scale=0.2 / 600 ms / seed=42 measurements:
| Pop | scale=0.1 ratio | scale=0.2 ratio | Δ |
|---|---|---|---|
| L23e | 0.67× | 0.27× | overshoots low |
| L23i | 1.19× | 0.94× | improving |
| L4e | 0.68× | 0.73× | stable |
| L4i | 1.21× | 1.08× | improving |
| L5e | 1.97× | 1.52× | -23 % |
| L5i | 1.50× | 1.27× | -15 % |
| L6e | 2.81× | 2.43× | -14 % |
| L6i | 1.24× | 1.10× | improving |
- The deep-layer residuals (L5e, L6e) shrink monotonically with scale; extrapolating linearly suggests scale=0.5 closes them to within 1.2-1.3× of Potjans Table 4. Closing all 8 populations to within 10 % requires full scale (~77 000 cells, ≈ 50 min/sec biotime). The implementation is faithful — the residual is intrinsic to sub-full-scale finite-size effects.
- `docs/api/cortical_column.md` §4.1 now documents the per-scale ratios side-by-side with the historical baseline and the rejected no-multapse experiment.
CorticalColumn per-connection Gaussian delay distribution (2026-04-18)¶
- `network/cortical_column.py` adds per-connection delay binning. New constants `DELAY_E_SIGMA = 0.75 ms`, `DELAY_I_SIGMA = 0.4 ms` (Potjans Table 5). New `__init__` parameters `delay_distribution: bool = True` and `n_delay_bins: int = 5`. At construction time each (target, source) pair samples `K_per_target * n_t` per-connection delays from `N(DELAY_*, sigma_*)`, quantile-bins them into 5 groups, and stores one sub-CSR per bin. Per `step()`, each pair contributes one `dot()` per bin, reading the source spike vector at that bin's delay offset.
- Setting `delay_distribution=False` restores the legacy single-mean-delay path for fast smoke tests and direct comparison.
- Fidelity dramatically tightened. Measured at `scale=0.1`, `seed=42`, 200 ms analysis window after 100 ms burn-in:
| Population | single-delay ratio | per-conn Gaussian ratio |
|---|---|---|
| L23e | 5.29× | 0.67× |
| L23i | 4.78× | 1.19× |
| L4e | 0.83× | 0.68× |
| L4i | 2.03× | 1.21× |
| L5e | 3.05× | 1.97× |
| L5i | 2.10× | 1.50× |
| L6e | 5.23× | 2.81× |
| L6i | 2.33× | 1.24× |
5/8 populations now sit within 1.2× of Potjans Table 4; the remaining 3 (L4e, L5e, L6e) within 2-3×.
- Cost: per-step ≈ 5× slower (5 sparse mat-vecs per pair instead of 1). At scale=0.1, sim wall went 32 s → ~290 s for 600 ms (matches 5× expectation).
- New `tests/test_cortical_column.py::TestPublishedFidelity::test_per_connection_delays_tighten_rates` — asserts ≥ 5/8 populations within [0.5, 1.5]× of published Table 4 values. Pins the win.
- `benchmarks/bench_cortical_column.py` now benches BOTH `delay_distribution` modes side-by-side.
- All 29 cortical_column tests pass with the new default (29 passed in 14:18 with delay distribution; 24 deselected-fidelity tests in 4:39 for fast iteration via `-k 'not Fidelity'`).
PINGCircuit Rust acceleration backend (2026-04-18)¶
- New Rust per-step kernel `engine/src/ping.rs` with PyO3 wrapper `sc_neurocore_engine.py_ping_step`. Mirrors the Python step semantics (LIF + AMPA / GABA decays + drive + Wiener noise + refractory + spike detect + reset). Noise samples are drawn on the Python side and passed in as `xi_e` / `xi_i` so the per-instance RNG state evolves identically across both backends.
- New `backend=` parameter on `PINGCircuit` (`"auto" | "rust" | "python"`, default `"auto"`). `"rust"` raises `RuntimeError` if the kernel is not built; `"auto"` falls back to NumPy.
- The bridge wrapper `bridge/sc_neurocore_engine/__init__.py` re-exports `py_ping_step` so pytest's `bridge/`-on-`sys.path` setup sees the Rust symbol.
- New `tests/test_gamma_oscillation.py::TestPythonRustParity` (6 cases): per-population firing rates within 10 % across (80, 20) / (400, 100) / (1000, 250); dominant FFT peak within 1.5 Hz; explicit `backend="rust"` smoke; invalid-backend rejection. Per-cell membrane V values drift at the float-noise level (NumPy SIMD/FMA vs Rust scalar ordering) — documented inline; aggregate dynamics match.
- `benchmarks/bench_gamma_oscillation.py` extended to bench BOTH backends. Measured speedup: ~3.3-4.3× across the three workload sizes (per-step 145.8 → 33.7 µs at (80, 20); 588.3 → 178.3 µs at (4000, 1000)). All 6 runs stay in the published 30-80 Hz dominant band.
- `engine/src/ping.rs` ships 3 Rust unit tests (no-drive silence; supra-threshold drive + refractory hold; determinism for identical inputs). All pass on `cargo test --release`.
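The per-cell step semantics both backends mirror (conductance decay, drive, externally supplied noise, refractory hold, spike detect + reset) can be sketched for a single cell. Parameter values and the function shape here are illustrative, not the shipped defaults; the key design point — `xi` drawn outside the kernel — is what keeps RNG state identical across backends.

```python
import math

def ping_cell_step(v, g_ampa, g_gaba, xi, *, dt=0.1, tau_m=10.0,
                   tau_ampa=2.0, tau_gaba=6.5, drive=1.2,
                   v_th=1.0, v_reset=0.0, refrac_left=0):
    """One Euler step of a LIF cell with decaying AMPA/GABA conductances.
    `xi` is a pre-drawn noise increment, supplied by the caller so both
    the Python and Rust kernels consume the same random stream."""
    g_ampa *= math.exp(-dt / tau_ampa)   # exponential conductance decay
    g_gaba *= math.exp(-dt / tau_gaba)
    spike = False
    if refrac_left > 0:
        refrac_left -= 1                 # held at reset during refractory
        v = v_reset
    else:
        dv = (-v + drive + g_ampa - g_gaba) / tau_m
        v += dt * dv + xi                # Wiener increment added directly
        if v >= v_th:                    # spike detect + reset
            spike, v, refrac_left = True, v_reset, 20
    return v, g_ampa, g_gaba, refrac_left, spike
```

Vectorised over the E and I populations with NumPy (or a Rust loop), this is the whole per-step kernel; everything stochastic lives in the `xi` arrays passed in.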
CorticalColumn no-multapse experiment — REJECTED (2026-04-18)¶
- Tried replacing the multapse-with-replacement adjacency builder with a vectorised `argpartition` no-multapse sampler (matching NEST's `multapses=False` default). Mean per-target weight is identical between the two approaches, and per-target unique connectivity rises from ~63 % to 100 %.
- Measured at `scale=0.1`, `seed=42`, 600 ms: rates BLEW UP to the refractory ceiling for 6 of 8 populations (L23e 90 Hz, L4e/L4i ≈ 410 Hz, L5e/L5i/L6i 260-390 Hz). Pre-experiment multapse-with-replacement gave rates 1.6-7.5× over Potjans Table 4 (within band, just inflated). Post-experiment no-multapse made the divergence ~10× worse.
- Honest finding: at sub-full scale, the deterministic per-target in-degree of the no-multapse path amplifies population synchrony in the heavy-recurrent regime (K approaches N_s for several pairs); the multapse path's natural variance dampens this. Documented inline next to the multapse sampler so future contributors don't repeat the experiment without first re-reading van Albada 2015 §3.
PINGCircuit scale-invariant weight normalisation (2026-04-18)¶
- `network/gamma_oscillation.py`: per-spike conductance contributions are now divided by source population size at construction (`_w_*_eff = w_* · default_size / actual_size`). The default `(80, 20)` published weights stay bit-identical; larger circuits no longer drift out of the 30-80 Hz band.
- `bench_gamma_oscillation.py` now reports 40.0 / 41.2 / 41.2 Hz across `(80,20) / (400,100) / (4000,1000)` — all in band — vs 40.0 / 103.8 / 76.2 before the fix. All 19 PINGCircuit tests still pass (default weights and behaviour unchanged at `(80, 20)`).
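The normalisation itself is one line; a sketch (helper name hypothetical) makes the invariant explicit — total synaptic input per target cell stays constant as the source population is resized:

```python
def effective_weight(w: float, default_size: int, actual_size: int) -> float:
    """Scale a per-spike conductance so total input per target cell is
    invariant under population resizing: w_eff = w * default / actual."""
    return w * default_size / actual_size
```

At `actual_size == default_size` the weight passes through untouched, which is why the published `(80, 20)` configuration stays bit-identical.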
Honest benchmark scripts for network/ models (2026-04-18)¶
- `benchmarks/bench_cortical_column.py`: 3-config wall-clock + per-population firing rates + Potjans Table 4 ratios for `CorticalColumn`. Replaces hand-measured numbers in `docs/api/cortical_column.md` with reproducible JSON output at `benchmarks/results/bench_cortical_column.json`. Honest BLOCKED status reported per backend (Rust/Julia/Go/Mojo) per `feedback_no_fabricated_benchmarks` and `feedback_module_standard_attnres`.
- `benchmarks/bench_gamma_oscillation.py`: 3-workload `step()` wall-clock + dominant gamma-frequency check (must lie in 30-80 Hz) for `PINGCircuit`. JSON output at `benchmarks/results/bench_gamma_oscillation.json`. Documents the per-cell LIF + 4 conductance decays as a clean Rust + Mojo target (BLOCKED, tracked under the multilang policy). The bench surfaces a real fidelity edge case at `n_e=400, n_i=100` (f_dom=103.8 Hz, outside the published 30-80 Hz band) that the default-configuration test does not catch.
- `docs/api/cortical_column.md` performance table updated to reference the bench script and JSON path; numbers replaced with the measured values (build 0.04 / 2.04 / 4.07 s and per-step 0.96 / 2.07 / 5.29 ms across the three configurations).
Bandit MEDIUM triage (2026-04-18)¶
- 6 MEDIUM `B307` findings (use of `eval`) → ACCEPT with `# nosec B307` markers and inline rationale: `equation_builder.py` Euler integrator, RK4 derivative eval, threshold expression, and reset rule (4 sites); `studio/analysis.py` nullcline grid eval (2 sites). All sites are downstream of the `EquationNeuron._validate_expr` AST whitelist (`_ALLOWED_AST_NODES` + `_BLOCKED_NAMES` reject any escape vector before `compile`) with empty-`__builtins__` eval globals.
- Re-running `bandit -r src/ -ll` returns 0 findings.
- 55 LOW findings remain (B101 asserts, B603/B404/B607 subprocess, B110 try/pass, B311 random); informational with no real impact. Full inventory in `docs/internal/audit_bandit_2026-04-18.md` and `docs/internal/AUDIT_INDEX.md`.
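The AST-whitelist-before-`eval` pattern that justifies the `# nosec B307` markers can be sketched as follows. The whitelist here is deliberately tiny and illustrative — the project's `_ALLOWED_AST_NODES` list is larger and its blocked-name logic differs.

```python
import ast

_ALLOWED_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
                  ast.Name, ast.Load, ast.Add, ast.Sub, ast.Mult,
                  ast.Div, ast.Pow, ast.USub, ast.Call)
_ALLOWED_NAMES = {"v", "I", "exp", "tanh"}   # illustrative, not the real set

def safe_eval(expr: str, variables: dict) -> float:
    """Evaluate an equation expression only after every AST node type and
    every name passes a whitelist; eval then runs with empty __builtins__
    so no escape hatch remains even if a name slips through."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
        if isinstance(node, ast.Name) and node.id not in _ALLOWED_NAMES:
            raise ValueError(f"disallowed name: {node.id}")
    code = compile(tree, "<equation>", "eval")
    return eval(code, {"__builtins__": {}}, dict(variables))  # nosec B307
```

Attribute access, subscripts, lambdas, and dunder names are all rejected at the AST stage, so `eval` only ever sees arithmetic over the supplied variables.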
CorticalColumn Potjans & Diesmann 2014 (2026-04-18)¶
- `network/cortical_column.py` rewritten from a 5-population canonical-microcircuit toy to the full 8-population Potjans & Diesmann 2014 model: L23e, L23i, L4e, L4i, L5e, L5i, L6e, L6i with per-population sizes from Table 5, the verbatim 8×8 connection-probability matrix from Table 5, per-cell background Poisson drive (`K_bg` per population, `bg_rate=8 Hz`), and exponentially decaying current-based PSCs (`tau_syn=0.5 ms`).
- LIF integration: `C_m=250 pF`, `tau_m=10 ms`, `t_ref=2 ms`, `E_L=V_reset=-65 mV`, `V_th=-50 mV`. Per-source delays: `1.5 ms` (E), `0.8 ms` (I), quantised to `dt`.
- Synaptic weights: `w_e=87.81 pA`, `w_i=-g·w_e` with `g=4` (configurable), `w_l4_to_l23e=2·w_e` per the Potjans boost.
- Sparse `scipy.sparse.csr_matrix` adjacency per (target, source) pair with multapses sampled with replacement; full-scale in-degree preservation under `scale_correction=True` (van Albada et al. 2015 protocol). `simulate(duration_ms, dt)`, `step(dt)`, `population_rates(rasters, dt, burn_in_ms)`, `total_indegree(target)` and `reset_state()` helpers.
- `tests/test_cortical_column.py` rewritten: 29 tests covering smoke, determinism (per-instance RNG, global-seed leak-proofing), connectivity (Table 5 entries, K_bg, weight signs, L4e→L2/3e boost, sparse adjacency built per pair), and published fidelity (no silent populations, no refractory-ceiling saturation, E/I asymmetry, L4e in band, zero-background silence). 100 % coverage on `cortical_column.py`. Closes #10.
- `docs/api/cortical_column.md` rewritten end-to-end (308 lines): published-reference summary, implementation overview (8 populations, sparse adjacency build, LIF + synapse + refractory, delay handling), public API reference, verification table vs Potjans Table 4 (L4e match within 1 %, other populations within 2-4×), performance table (4.6 s / 19.5 s / 43.6 s wall at scale 0.02 / 0.05 / 0.1), and reference list (Potjans 2014, van Albada 2015, Binzegger 2004, Hahne 2017, Douglas & Martin 2004).
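The LIF membrane update with the parameters listed above can be sketched for one cell. The exponential-Euler scheme here is an assumption (the model may use plain Euler); the unit bookkeeping — pA·ms/pF = mV — is what makes the constants compose directly.

```python
import math

def lif_step(v_mv, i_syn_pa, dt_ms=0.1, *, c_m_pf=250.0, tau_m_ms=10.0,
             e_l_mv=-65.0, v_th_mv=-50.0, v_reset_mv=-65.0):
    """One exponential-Euler step of a current-based LIF cell:
    dV/dt = (E_L - V)/tau_m + I_syn/C_m. Returns (v, spiked).
    Refractory handling is omitted for brevity."""
    prop = math.exp(-dt_ms / tau_m_ms)
    # Steady-state voltage the membrane relaxes toward under constant I.
    v_inf = e_l_mv + i_syn_pa * tau_m_ms / c_m_pf
    v = v_inf + (v_mv - v_inf) * prop
    if v >= v_th_mv:
        return v_reset_mv, True
    return v, False
```

With `I_syn = 1000 pA` the steady state is `-65 + 1000·10/250 = -25 mV`, well above the `-50 mV` threshold, so the cell fires within a few tens of steps from rest.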
PINGCircuit conductance-based gamma (2026-04-18)¶
- `network/gamma_oscillation.py` rewritten from a rate-coded toy model to a per-cell conductance-based Börgers-Kopell 2003 weak-PING. HH-style integrate-and-fire with separate AMPA / GABA exponentially decaying conductances, refractory window, per-cell drive jitter and stochastic kicks. Default parameters reproduce the published 30-80 Hz gamma peak (verified at 40 Hz at the default operating point).
- `population_rate(spike_log, dt, bin_ms)` and `dominant_frequency(spike_log, dt, bin_ms, f_min, f_max)` helpers added; FFT-based with empty-log + out-of-band silence handling.
- `tests/test_gamma_oscillation.py` updated to the new API: 19 tests covering smoke, determinism (per-instance RNG isolation, global-seed leak-proofing), published fidelity (30-80 Hz peak, gain-loop disengage paths, Hz units, silence handling). 100 % coverage on `gamma_oscillation.py`. Closes #11.
- Replaced `np.sum(boolarray)` with `np.count_nonzero(boolarray)` in both implementation and tests to be reload-safe under coverage instrumentation (the `_NoValue` sentinel mismatch otherwise raised `TypeError` from `_methods.py`).
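The `dominant_frequency` contract — pick the strongest spectral peak inside [f_min, f_max], returning None for empty or silent input — can be sketched with a stdlib direct DFT. The real helper is FFT-based on NumPy and takes a spike log rather than a pre-binned signal; this O(N²) version just shows the contract.

```python
import math

def dominant_frequency(rate, dt_ms, f_min=30.0, f_max=80.0):
    """Dominant frequency (Hz) of a binned population-rate signal within
    [f_min, f_max]. Returns None for empty input or in-band silence."""
    n = len(rate)
    if n == 0:
        return None
    mean = sum(rate) / n
    x = [r - mean for r in rate]            # remove DC so it cannot win
    dt_s = dt_ms / 1000.0
    best_f, best_p = None, 0.0
    for k in range(1, n // 2 + 1):
        f = k / (n * dt_s)                  # frequency of DFT bin k
        if not (f_min <= f <= f_max):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f
```

A 40 Hz tone binned at 2 ms over 0.5 s lands exactly on DFT bin 20, so the helper recovers 40.0 Hz; an all-zero signal yields no in-band power and returns None, matching the silence-handling behaviour described above.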
Repository hygiene (2026-04-18)¶
- SPDX header format converted from 1-line piped to 2-line form across 2728 source files (.py / .jl / .rs / .go / .mojo). Closes #60.
- `microtubule_neuron.v` Engineer attribution: Arcane Sapience.
- `cargo clippy --release --lib`: 20 in-source warnings → 0.
- Bandit HIGH severity in `nas/sc_nas_engine.py:169` → 0 (`hashlib.md5(..., usedforsecurity=False)`).
- Chiplet package coverage 95 % → 100 % (`test_hierarchical_partitioner_perf.py`, `test_chiplet_gen_edge_cases.py`).
- `tools/run_full_cov.sh`: batched per-directory `--cov-append` runner. First full sweep completes at 43.81 % cumulative coverage; no OOM. Closes #58.
- `.gitignore`: `.agent_metadata.json`.
- `ruff`, `rustfmt`: clean across all touched files.
Chiplet Partitioner — Multi-Language KL Refine (2026-04-18)¶
- Perf: `HierarchicalPartitioner.partition` at V=200 went from 963 ms (pre-#65) → 12.7 ms (Python post-fix) → 0.04 ms (Mojo). Total wall-clock improvement at V=200: 24,000× across the chain.
- #65 fix: `CorrelationAwareGraph` now caches a `(min, max) → edge` lookup → O(1); `_spectral_bisect` hoists `set(vertices)` out of the inner loop. 22-29× speedup at V=50/100/200.
- #64-prep refine fix: `_per_partition_cost(v, n_parts, ...)` returns the full length-P cost vector in ONE neighbour scan (was P redundant scans). Additional 2-9× over #65; bit-identical canonical output.
- #74 multi-language KL refine: Rust (`engine/src/partition.rs`), Julia (`accel/julia/chiplet/kl_refine.jl`), Go (`accel/go/partition/partition.go`), Mojo (`accel/mojo/partition/partition.mojo`) all wired into `HierarchicalPartitioner(refine_backend=...)`. Bit-exact `part_map` parity verified end-to-end via dispatcher tests at V=100. Empirical fastest pick at V=1000: Mojo 0.20 ms (351×), Julia 0.26 ms (270×), Rust 0.29 ms (242×), Go 0.68 ms (103×), Python 70 ms.
- Bench harness: `benchmarks/bench_kl_refine.py` runs 5 backends with parity check; results in `benchmarks/results/bench_kl_refine.json`.
- Tests: 218 chiplet tests (39 new this batch); coverage 99.58 % on the chiplet package, with `chiplet_gen.py` at 100 % and `hierarchical_partitioner.py` at 99 %.
LGSSM Multi-Language Acceleration (2026-04-17)¶
- Mojo LGSSM Kalman filter (`accel/mojo/world_model/lgssm.mojo`): hand-rolled matmul + Cholesky + triangular solve via `mojo build --emit shared-lib`. 46× over Python, 8× over Rust at the T=200, d=4, p=3 workload. Closes #69.
- Go LGSSM (`accel/go/lgssm/lgssm.go`): cgo + ctypes shared lib, hand-rolled Cholesky. Closes #70.
- Julia LGSSM (`accel/julia/world_model/predictive_model.jl`): juliacall + LinearAlgebra LAPACK. Closes #68.
- Rust LGSSM (`engine/src/lgssm.rs`): PyO3 + ndarray Cholesky. Closes #67.
- All 4 backends dispatched via `KalmanFilter.filter(backend='auto'|'rust'|'julia'|'go'|'mojo'|'python')`; bit-exact parity vs Python at atol ≤ 1e-9 on means/covariances, ≤ 1e-7 on log-likelihood.
- Mojo 0.26 FFI pattern proven: passing the raw `Int` address via `arr.ctypes.data` and reconstructing with `UnsafePointer[T, MutAnyOrigin](unsafe_from_address=addr)` inside the `@export` body works around the parametric-signature restriction. Same pattern reused for fault_injection and KL refine.
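For reference, every backend implements the same textbook LGSSM predict/update recursion. A minimal NumPy sketch of that recursion (not the project's `KalmanFilter` API; parameter names are illustrative):

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, m0, P0):
    """Minimal LGSSM Kalman filter: returns the filtered state means."""
    m, P = m0, P0
    means = []
    for yt in y:
        # Predict step: propagate mean and covariance through the dynamics.
        m = A @ m
        P = A @ P @ A.T + Q
        # Update step: fold in the observation via the Kalman gain.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        m = m + K @ (yt - C @ m)
        P = (np.eye(len(m)) - K @ C) @ P
        means.append(m.copy())
    return np.array(means)

rng = np.random.default_rng(0)
T, d, p = 50, 2, 1
A = 0.9 * np.eye(d)
C = np.ones((p, d))
Q = 0.01 * np.eye(d)
R = 0.1 * np.eye(p)
y = rng.normal(size=(T, p))
means = kalman_filter(y, A, C, Q, R, np.zeros(d), np.eye(d))
```

The Cholesky-based backends replace `np.linalg.inv(S)` with a triangular solve for speed and numerical stability; the recursion itself is identical.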
Fault Injection Multi-Language (2026-04-17)¶
- Rust + Julia + Go + Mojo kernels for the 5 fault models (`bitflip`, `stuck_at_0`/`stuck_at_1`, `dropout`, `gaussian`). Mojo wins 4/5 boolean kernels (2.7-8.2× over NumPy); Julia wins Gaussian via Ziggurat `randn`. Bench harness with 4σ binomial parity checks in a `benchmarks/bench_kl_refine.py`-style 5-backend layout.
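The five fault models are simple enough to sketch in NumPy; the accelerated kernels compute the same transformations (function and variable names here are illustrative, not the project's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def bitflip(bits, p):
    """Flip each bit independently with probability p."""
    return bits ^ (rng.random(bits.shape) < p)

def stuck_at(bits, p, value):
    """Force a random fraction p of bits to a constant 0/1 value."""
    mask = rng.random(bits.shape) < p
    return np.where(mask, value, bits)

def dropout(x, p):
    """Zero a random fraction p of analog values."""
    return np.where(rng.random(x.shape) < p, 0.0, x)

def gaussian(x, sigma):
    """Additive Gaussian noise on analog values."""
    return x + rng.normal(0.0, sigma, x.shape)

bits = rng.random(10_000) < 0.5
flipped = bitflip(bits, 0.1)
frac = np.mean(bits != flipped)   # ~0.1, binomially distributed
```

The 4σ binomial parity check mentioned above follows from exactly this: the number of flipped bits is Binomial(N, p), so a backend whose flip count falls outside p·N ± 4·sqrt(N·p·(1−p)) is flagged.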
Bench Harness Honest Exemptions (2026-04-17)¶
- `bench_safety_monitor.py` + `bench_chiplet.py` now emit a `backends` block in the JSON output documenting USED / EXEMPT / BLOCKED-ON-#X status per backend per op, with explicit FFI-vs-compute math instead of silent skipping.
Cross-Module Integration — (2026-04-16)¶
- Shared Core Types `core/types.py`: unified `HardwareBudget`, `ResourceReport`, `LayerSpec`, `estimate_network()` — single source of truth for Optimizer ↔ NAS ↔ Runtime
- Closed-Loop Adaptive Controller `control/adaptive_loop.py`: runtime drift detection → SA re-optimisation → new `RuntimeConfig`, configurable cooldown/threshold
- Unified Energy Reporter `energy_accounting/unified_reporter.py`: bridges `CarbonModel` + `ThermalModel` + ASIC power into a single `analyze()` call
- End-to-End Export Pipeline `export/pipeline.py`: Model Zoo → ONNX → TVM Relay → MLIR/SSA → SystemVerilog in one `run()` call
- Rust Wiring: `sc_optimizer.py` → `optimizer.rs` SA engine, `sc_nas_engine.py` → `evo.rs` tournament selection, `photonic_emitter.py` → `photonic.rs` crosstalk analysis
- Package Exports: updated `core/__init__.py`, `control/__init__.py`, `export/__init__.py` with new module exports
- Integration Tests: 20 new tests in `tests/test_integration/test_cross_module.py` covering all 5 actions
- Maturin: rebuilt `sc_neurocore_engine` v3.14.0 with all Rust bindings
- Total: 10,592 tests (8,895 Python + 1,697 Rust) — ALL GREEN
Extended Rust Wiring — QA & DNA Bridges (2026-04-17)¶
- Quantum Annealing: `bridges/quantum_annealing.py` → `py_qa_simulated_annealing` (2,402× at 100 qubits)
- `IsingModel.energy()` → `py_qa_ising_energy` (Rust path for n > 20 qubits)
- `SimulatedAnnealer.solve_ising()` → `py_qa_simulated_annealing` (467× at 20Q → 2,402× at 100Q)
- `EnergyLandscape.analyze()` → `py_qa_batch_ising_energy` (batch energy for >100 samples)
- DNA Mapper: `bridges/dna_mapper.py` — Rust engine loaded (`_HAS_RUST_DNA`)
- Imported: `py_dna_design_sequence`, `py_dna_detect_hairpins`, `py_dna_check_cross_hybridization`, `py_dna_simulate_kinetics`, `py_dna_design_orthogonal_set`
- Photonic: fixed the `py_ph_analyze_crosstalk` API (channel_ids, wavelengths, bandwidths, powers)
Python vs Rust Benchmarks — Integration Hot Paths (2026-04-16)¶
- SA Optimizer: 7× (5 layers) → 36× (20 layers) → 47× (50 layers)
- Tournament Selection: 337–394× (amortised per-round overhead elimination)
- Batch Mutate: 17–21× across population sizes 50–1000
- Population Diversity: 34–90× (O(N²) SIMD pairwise distance)
- Mean Rust speedup: 334.6× across all hot paths (incl. QA)
- Peak QA: 467× (20Q) → 1,426× (50Q) → 2,402× (100Q)
- E2E Pipeline: NAS→Optimizer→Energy→Verilog in 13.7 ms (small) to 116 ms (large)
- Criterion (Rust-native): spike_times = 83 ns, firing_rate = 13 ns, ISI = 96 ns, van_rossum = 1.2 µs (N=100)
- Results: `benchmarks/results/py_vs_rust_integration.json`
- Script: `benchmarks/py_vs_rust_benchmark.py`
Cross-Language Acceleration — Spike Stats (2026-04-16)¶
- Crate `spike_stats_core` (v0.1.0): 16 functions, 28 Rust tests, PyO3 + Criterion
- Distance (7 fns): `victor_purpura_distance` 181×, `spike_sync` 31×, `hunter_milton` 27×, `van_rossum`, `spike_distance`, `earth_movers_distance`, `multi_neuron_victor_purpura` 160×
- Correlation (5 fns): `cross_correlation`, `event_synchronization`, `spike_time_tiling_coefficient`, `coincidence_index`
- Variability (4 fns): `approximate_entropy` 73×, `sample_entropy` 78×, `lempel_ziv_complexity` 69×, `permutation_entropy` 65×
- 99/99 Python tests pass on both Rust and Python fallback paths
- Python dispatch wired in: `distance.py`, `correlation.py`, `variability.py`
Cross-Language Acceleration — Stochastic Doctor (2026-04-16)¶
- PyO3 bindings for the `stochastic_doctor_core` crate: `py_scc_bytes`, `py_scc_batch`, `py_precision_bytes`, `py_histogram`, `PyDriftDetector`
- Replaced the legacy `ctypes.CDLL` loader with the PyO3 import pattern (primary) and a Python fallback (secondary)
- `SC_NEUROCORE_NO_RUST=1` env var forces the Python path
- 16/16 Python tests pass on both Rust and Python paths
- 23 Rust tests pass
- Benchmarks (SCC single-pair): 35× at N=100, 3.5× at N=1M
- Benchmarks (batch SCC N×N): 15-18× for 4-64 neuron layers
- Benchmarks (precision): 5-14× across all sizes
- Criterion benchmarks: `crates/stochastic_doctor_core/benches/doctor_bench.rs`
- Python benchmark: `benchmarks/stochastic_doctor_benchmark.py`
- Results: `benchmarks/results/stochastic_doctor_py_vs_rust.json`
- API docs updated with full benchmark tables: `docs/api/stochastic_doctor.md`
Module Integration — 19 Industrialized Modules (2026-04-16)¶
- Industrial tier: safety_cert (IEC 61508/ISO 26262, 81 tests), asic_flow (multi-PDK, 67 tests), fault_injection (radiation-grade, 22 tests), uvm_gen (UVM testbench, 71 tests)
- Exascale tier: hypervisor (multi-tenant, 78 tests), digital_twin/twinsync (time-warp sync, 72 tests)
- Substrates tier: spintronic (MTJ mapper, 66 tests), chiplet (UCIe/BoW, 94 tests), memristor (crossbar, 70 tests), analog_bridge (SC-to-analog, 27 tests)
- Frontiers tier: evo_substrate (self-replicating evolution, 91 tests), meta_plasticity (self-modifying rules, 72 tests), bioware (organoid interface, 79 tests), federated (DP-SGD, 93 tests), bci_studio (closed-loop BCI, 32 tests)
- Unification tier: explainability (causal attribution, 71 tests), neuro_symbolic (predictive coding, 34 tests), stochastic_doctor (bitstream diagnostics, 16 tests), model_zoo (auto-Verilog, 37 tests)
- All modules: SPDX dual-license headers, `__tier__` classification, `__init__.py` with docstrings
- 19 MkDocs API doc pages with `mkdocstrings` directives
- Updated `mkdocs.yml` nav with 5 new categories (Industrial, Substrates, Exascale, Frontiers, Unification)
- Integration reference: `docs/MODULE_INTEGRATION.md`
- Total: 1,173 new Python tests from integrated modules
Rust Workspace — 5 Research Crates Integrated (2026-04-16)¶
- Created a `crates/` directory for research Rust crates
- Integrated: tinysc_riscv (83 tests), core_engine (22 tests), autonomous_learning (12 tests), neuro_symbolic (28 tests), stochastic_doctor_core (23 tests)
- Root `Cargo.toml` workspace now has 6 members (engine + 5 research crates)
- Engine (`sc_neurocore_engine`, 1,549 tests) verified undamaged after workspace expansion
- Total: 1,717 Rust tests across 6 crates
Evolutionary Substrate — (2026-04-16)¶
- `FormalSafetyGuard`: pre-deployment safety validation
- `CPPNGenome`: Compositional Pattern Producing Network developmental encoding
- `IslandModel`: multi-deme evolution with migration
- `NoveltyArchive`: k-NN behavioural novelty search
- `HWFitnessCollector`: FPGA execution feedback for hardware-in-the-loop fitness
- `ParetoFront`: NSGA-II-style non-dominated sorting
- `TournamentSelector`, `AgeRegulator`, `BloatPenalizer`, `ExtinctionDetector`, `CoevolutionArena`
- `EvoStatisticsTracker`, `ComplexityTracker`, `genome_diff()`, `shared_fitness()`
- Module grew from 657 to 1,400 LOC and from 42 to 91 tests
Foundation-Model Neural Decoders (2026-04-07)¶
- POYODecoder: spike tokenisation + cross-attention (Azabou et al. 2023 NeurIPS)
- POSSMDecoder: diagonal SSM with HiPPO-LegS init (Ryoo et al. 2025 ICLR)
- NDT3Decoder: causal masked self-attention on binned spikes (Ye & Pandarinath 2025)
- CEBRAEncoder: InfoNCE contrastive embedding with analytical backprop (Schneider et al. 2023 Nature)
- Rust acceleration: tokenise_spikes, sinusoidal_position_encode, scaled_dot_product_attention, gaussian_attention, ssm_step_diagonal, infonce_loss (6 pub fn, 11 tests)
- PyO3: 5 functions registered
- Tests: 47 multi-angle tests
- Documentation: 976 lines, 8/8 sections
Transcriptomic Foundation Model Interfaces (2026-04-07)¶
- ScKGBERTInterface: dual S-Encoder + K-Encoder with Gaussian attention (Li et al. 2025 Genome Biology)
- GeneformerInterface: rank-value tokenisation + multi-head attention + MLM (Theodoris et al. 2023 Nature)
- rank_value_encode: shared utility for gene expression tokenisation
- Tests: 29 multi-angle tests
- Documentation: 1,118 lines, 8/8 sections
Gap Model Python + PyO3 + Docs (11 models, 2026-04-07)¶
- 10 new Python implementations (publication-exact): AdaptiveThresholdMoENeuron, HybridLinearAttentionNeuron, QuantumInspiredLIFNeuron, DendriticNMDANeuron, MulticompartmentMCNNeuron, AstrocyteLIFNeuron, DirectionSelectiveRGC, CochlearHairCell, ShortTermPlasticitySynapse, DopamineStdpSynapse
- PyO3 wiring: 11 models registered (2 macro + 9 manual wrappers)
- Tests: 87 multi-angle tests
- 10 docs (5,701 lines total)
- GPU backend documentation (607 lines)
CI & Dependency Fixes (2026-04-07)¶
- PEP 639: migrated `license = { text = "..." }` → `license = "AGPL-3.0-or-later"` (fixes setuptools ≥ 78)
- mypy: 1.19.1 → 1.20.0
- cyclonedx-bom: 7.2.2 → 7.3.0
- ci.yml: pinned all mypy stub dependencies to exact versions (CodeQL #287)
- cargo fmt: applied to all new Rust code
- Purged 52 resolved failed/cancelled CI runs
- Closed superseded dependabot PRs #53, #55
Neuron Models — (12 new models, 2026-04-04/05)¶
- TUMNetwork: rate model with short-term plasticity (depression + facilitation), 3 ODEs
- ElBoustaniNetwork: E/I + NMDA bistability, 3 ODEs
- GradedSynapseNeuron: non-spiking, passive RC + sigmoid release
- GapJunctionNeuron: LIF + electrical synapse with Cx36 rectification
- FrankenhaeuserHuxleyAxon: GHK permeability-based currents (not linear V-E)
- NodeOfRanvier: MRG 2002 — Nav1.6 transient + persistent + Kv7 slow K
- MyelinatedAxon: MRG node + passive internode cable
- CardiacPurkinjeFibre: DiFrancesco-Noble 1985, 6 currents
- SmoothMuscleCell: CaL + BK + IP3R/SERCA + Ca²⁺ store
- EndocrineBetaCell: CaL + K_dr + K_ATP + K_Ca glucose-dependent bursting
Fidelity Audit Fixes (7 models corrected, 2026-04-04)¶
- RetinalGanglionCell: basic LIF → Pillow 2005 GLM (stimulus + history filters)
- InnerHairCell: no vesicle pool → Meddis 1986/2006 (q/c/w compartments)
- OuterHairCell: unidirectional sigmoid → bidirectional asymmetric prestin (Santos-Sacchi 2006)
- GranuleCell: LIF-style → D'Angelo 2001 full HH (7 ionic currents)
- AlphaMotorNeuron: PIC no inactivation → h_pic + Ca²⁺ buffering
- RodPhotoreceptor: no Ca²⁺ feedback → Ca²⁺-GC feedback (Nikonov 2006, Hill n=4)
- TraubMilesNeuron: missing M-current → Kv7/KCNQ (Yamada 1989)
Kinetics Audit Fixes (3 models upgraded, 2026-04-05)¶
- GolgiCell (CRITICAL): 5-current WB → full Solinas 2007 (11 currents, 13 gating variables)
- DCNNeuron (MODERATE): added persistent Na (INaP) + Ca²⁺-dependent AHP (7 currents total)
- OlfactoryReceptorNeuron (MODERATE): added PDE4 negative feedback on cAMP
Infrastructure (2026-04-05)¶
- `supported_models()`: 28 missing entries added (159 total)
- Interface wrappers: 20 non-standard models wired via Wr* types (multi-input, i32-input, graded/rate)
- All 4 failing CI workflows fixed (clippy, ruff, MkDocs, typos)
- `cargo fmt` applied to all engine source
- Fresh Criterion benchmarks published (2026-04-05)
- Documentation audit: all stale numbers corrected across README, pricing, index, benchmarks
Notebooks (13 new, 21 total)¶
- 08_equation_to_verilog: ODE string → Python sim → Q8.8 Verilog (LIF, FHN, Izhikevich)
- 09_topology_and_dynamics: 6 generators, adjacency matrices, degree distributions, raster plots
- 10_spike_train_analysis: ISI, CV, Fano, cross-correlation, van Rossum, PCA
- 11_biological_circuits: tripartite synapse Ca²⁺ dynamics, Rall dendrite nonlinearity
- 12_learning_rules: STDP, e-prop eligibility, R-STDP, STP facilitation/depression
- 13_quantisation_pipeline: float → Q8.8 → SC probabilities → Verilog export, error budget
- 14_sc_arithmetic_theory: AND=multiply, XNOR=bipolar, MUX=add, CORDIV=divide, Sobol vs Bernoulli convergence, Hoeffding bounds
- 15_fault_tolerance: SC vs fixed-point under bit-flips/stuck-at, TMR majority vote
- 16_neuron_atlas: 12 models from 8 families (LIF→ArcaneNeuron, 1907–2026)
- 17_reservoir_computing: liquid state machine, temporal XOR, ridge readout, SVD dimensionality
- 18_mixed_precision_sc: per-layer adaptive L, Hoeffding vs sensitivity allocation, Pareto frontier
- 19_compression_and_pruning: magnitude/SC-aware pruning, quantisation sweep, combined Pareto
- 20_power_analysis: event-driven vs clock-driven toggle count, scaling with network size
- 21_spike_alu: Turing-complete spike-based ALU — logic gates, SR latch register, ripple-carry adder, sort
- 22_ir_type_safety: IR signal type checker — Bitstream/Rate/Spike/Fixed, catch mismatches before Verilog synthesis
- 23_topological_observables: winding number, Ollivier-Ricci curvature, sheaf consistency defect, connection curvature
- 24_identity_lazarus: Lazarus checkpoint save/load/merge, TraceEncoder text→spikes, StateDecoder attractor extraction, DirectorController L16 self-regulation
- 25_cortical_column_dynamics: canonical 5-population microcircuit, thalamic drive, layer-resolved rasters, feedforward latency
- 26_spike_codec_benchmark: 5 codecs (ISI/AER/predictive/delta/streaming) on synthetic data, compression ratio vs density curves
- 27_python_to_proven_silicon: complete end-to-end pipeline — ODE string → Python sim → IR type check → Q8.8 Verilog → testbench → formal properties → resource estimate
- 28_domain_bridge: TensorStream prob↔bitstream↔quantum conversions, QuantumStochasticLayer cos²(θ/2) non-linearity, Born rule roundtrip
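The SC arithmetic identities from notebook 14 (AND = multiply for unipolar streams) can be sketched in a few lines of NumPy; this is the illustrative core of the technique, not notebook code:

```python
import numpy as np

def unipolar_stream(p, length, rng):
    """Bernoulli bitstream whose mean encodes the probability p."""
    return rng.random(length) < p

rng = np.random.default_rng(42)
L = 100_000
a, b = 0.6, 0.5
sa = unipolar_stream(a, L, rng)
sb = unipolar_stream(b, L, rng)

# Bitwise AND of independent unipolar streams multiplies the encoded
# values: E[sa & sb] = a * b, with O(1/sqrt(L)) Monte Carlo error.
prod = float(np.mean(sa & sb))   # ≈ 0.30
```

The Hoeffding bound referenced in the same notebook quantifies that error: with L = 100,000 bits, |prod − a·b| ≤ 0.01 with overwhelming probability.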
Tests (19 new files, ~3700 lines, ~310 test methods)¶
test_topology_generators.py: 6 generators — CSR validity, degree, symmetry, edge count, determinismtest_cordiv_division.py: CORDIV accuracy, monotonicity, convergence, adaptive_length Hoeffding boundstest_fault_injection.py: bit-flip degradation, stuck-at analytical bounds, TMR, SC vs fixed-point comparisontest_learning_advanced.py: EligibilityTrace decay, BPTT/TBPTT loss, R-STDP reward gating, STP facilitation/depression/recoverytest_quantisation_pipeline.py: Q8.8 roundtrip, dequantise fidelity, SC probability ordering, dot product end-to-endtest_network_monitors_stimulus.py: SpikeMonitor record/count/trains, StateMonitor accumulation, RateMonitor bins, TimedArray clamp, StepCurrent onset/offset, PoissonInput rate/seed/weighttest_neuron_families.py: parametrised test across 11 EquationNeuron models — step(), spike detection, reset, state finiteness, determinismtest_sc_convergence.py: AND O(1/√L), Sobol faster than Bernoulli, CORDIV monotonic, correlation violation, popcount exacttest_spike_alu.py: SpikeGate truth tables (AND/OR/NOT/NAND/XOR), De Morgan law, SpikeRegister roundtrip, SpikeALU add/sub/xor/compare/shift, spike_sort correctnesstest_topological_observables.py: winding number wraps, Ricci curvature complete>ring, sheaf defect zero when synchronised, connection curvature bounded by couplingtest_scpn_integrated.py: K_nm symmetric zero-diagonal, OMEGA_N physical frequencies, create_full_stack 16 layers, run_integrated_step finite, get_global_metricstest_identity_lazarus.py: IdentitySubstrate run/step/health, TraceEncoder encode/determinism, Checkpoint save/load/merge roundtrip, StateDecoder patterns/attractors, DirectorController monitor/diagnose/correcttest_cortical_column_dynamics.py: CorticalColumn step/run dict outputs, 5 populations, binary spikes, thalamic drive, L4-before-L5, inhibition, reset, determinismtest_codec_roundtrip.py: all 5 codecs parametrised — lossless roundtrip (sparse/empty/single-spike/all-ones), compression ratio bounds, shape 
preserved, edge cases (1 channel, 1 timestep)test_tensor_stream.py: TensorStream prob↔bitstream↔quantum roundtrips, Born rule, normalisation, p=0/1 edge cases, invalid conversion raisestest_quantum_hybrid.py: QuantumStochasticLayer cos²(θ/2) transfer, p=0→1, p=1→0, monotonic decreasing, multi-qubit independence
Model Validation¶
- LIF f-I curve: 29/29 tests, <5% error vs analytical solution
- Izhikevich 20 firing patterns: all from Izhikevich (2003) Table 1 validated
- Hodgkin-Huxley 1952: AP peak 40.6 mV, spike width 1.46 ms, AHP -75.1 mV
- NeuroBench SHD: 79.28% test accuracy (250K params, feedforward)
- Brian2 parity: exact LIF match (0.000 ms timing diff), 7.3× speedup
- 5 validation docs with measured data in `docs/validation/`
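The analytical LIF f-I curve used as the validation reference is a closed-form expression; a sketch of the standard formula (parameter names illustrative, defaults not the project's):

```python
import math

def lif_rate(I, tau=0.02, R=1.0, v_th=1.0, t_ref=0.002):
    """Analytical LIF firing rate in Hz for constant current drive.

    Below rheobase (R*I <= v_th) the neuron never reaches threshold;
    above it, the time to threshold from reset is tau * ln(RI / (RI - v_th)),
    plus the absolute refractory period t_ref.
    """
    drive = R * I
    if drive <= v_th:
        return 0.0
    t_spike = tau * math.log(drive / (drive - v_th))
    return 1.0 / (t_ref + t_spike)
```

A simulated LIF at small dt should match this curve closely, which is the basis of the "<5% error vs analytical solution" check above.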
Stochastic Computing Pipeline¶
- Bipolar SC (XNOR): `core/bipolar.py` for signed weight multiplication
- SC bitstream MNIST: 10 % (unipolar) → 35.6 % (bipolar) → 50.0 % (all fixes)
- SC-aware training: `SCAwareLIFNet` with bitstream noise injection (+9.5 pp)
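The bipolar XNOR identity behind `core/bipolar.py` can be sketched as follows (a minimal illustration, not the module's API): a value x in [−1, 1] is encoded with bit probability (x + 1)/2, and XNOR of two independent streams encodes the signed product.

```python
import numpy as np

def bipolar_stream(x, length, rng):
    """Bitstream whose mean encodes x in [-1, 1] via p = (x + 1) / 2."""
    return rng.random(length) < (x + 1.0) / 2.0

def bipolar_decode(bits):
    return 2.0 * float(np.mean(bits)) - 1.0

rng = np.random.default_rng(0)
L = 200_000
w, x = -0.5, 0.8
sw = bipolar_stream(w, L, rng)
sx = bipolar_stream(x, L, rng)

# XNOR of independent bipolar streams multiplies the signed values:
# decode(~(sw ^ sx)) ≈ w * x = -0.4
prod = bipolar_decode(~(sw ^ sx))
```

This is what makes signed weight multiplication possible with a single gate per synapse.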
Quantization-Aware Training¶
- `QuantizedLIFNet`: 2/4/8/16-bit STE weight quantization (PyTorch)
- `SCAwareLIFNet`: SC noise injection during training
- `SCAwareLinear`: drop-in layer replacement
Encoding Comparison¶
- 7 temporal spike encodings benchmarked on MNIST
- Latency encoding is Pareto-optimal: 88.1 % at 142 spikes (17× fewer than rate)
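Latency encoding owes its spike efficiency to emitting at most one spike per input: stronger inputs fire earlier. A minimal sketch of the idea (function and parameter names illustrative):

```python
import numpy as np

def latency_encode(intensity, t_max=100):
    """Map normalised intensities in [0, 1] to spike times: stronger
    pixels spike earlier, zero pixels never spike (-1 = no spike)."""
    intensity = np.asarray(intensity, dtype=float)
    times = np.full(intensity.shape, -1, dtype=int)
    active = intensity > 0
    times[active] = np.round((1.0 - intensity[active]) * (t_max - 1)).astype(int)
    return times

times = latency_encode([1.0, 0.5, 0.0])   # full intensity spikes at t=0
```

Rate coding instead emits spikes proportional to intensity over the whole window, which is where the roughly 17× spike-count gap comes from.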
Interoperability¶
- NeuroML 2 importer: iafCell, Izhikevich (2003/2007), AdEx
- SONATA network format importer: nodes.h5 + edges.h5, connectivity matrix
Reproducibility¶
- 7 Kaggle scripts in `notebooks/*_kaggle.py`
- JSON artifacts in `benchmarks/results/`
[3.14.0] — 2026-03-27¶
Visual SNN Design Studio (Experimental)¶
- New feature: web-based IDE for designing, training, compiling, and deploying SNNs
- 118-model browser with live simulation, parameter sliders, pattern classification
- 20+ analysis views: trace, phase, ISI, f-I, bifurcation, heatmap, sensitivity, STA, frequency response, characterisation, multi-model overlay, A/B comparison
- Compiler Inspector: SC IR build/verify/emit, SystemVerilog generation, co-simulation
- Synthesis Dashboard: Yosys synthesis for 4 FPGA targets (ice40, ECP5, Gowin, Xilinx), multi-target comparison, resource estimation without Yosys
- Training Monitor: live SSE metric streaming, 6 surrogate gradients, per-layer spike rates, learnable beta/threshold
- Network Canvas: React Flow drag-and-drop populations and projections, NIR export/import
- Full pipeline: network graph → validate → simulate → compile → synthesise in one click
- Project save/load: persistent JSON workspaces on server
- E-I balanced network simulation with Rust engine fast path
- 140+ Studio-specific tests
- Documentation: 7 pages on GitHub Pages, 10-step quickstart tutorial
- Launch: `pip install sc-neurocore[studio] && sc-neurocore studio`
Rust Engine¶
- `py_simulate_ei_network()`: fused E-I network simulation (CSR + Poisson + Euler) in a single Rust call
- `py_batch_simulate()`: batch model simulation with a NeuronVariant dispatch loop
- `create_neuron()` made `pub` for reuse across lib.rs
- 288 Rust tests passing
Performance¶
- Model list caching: the first `/api/models` call loads 118 models in ~1 s; subsequent calls take <1 ms
Security¶
- 25 CodeQL "information exposure through exception" fixes — no tracebacks in HTTP responses
- 5 CodeQL "uncontrolled data in path expression" fixes — project name sanitisation
- DOMPurify XSS fix via npm override (>=3.3.2)
- Bandit: MD5 marked `usedforsecurity=False`, narrowed bare `except` clauses
CI¶
- Engine wheel publish job added to publish.yml (PyPI OIDC)
- Bridge ImportError restored for pytest.importorskip compatibility
- PnR added to typos dictionary
- tsconfig.tsbuildinfo gitignored
- uvicorn skip guard for studio optional extra
[Unreleased]¶
NIR Bridge¶
- Roundtrip tests for all 18/18 NIR primitives (was 7/18)
- Auto-broadcast scalar neuron params to input size (Norse/snnTorch export 0-dim tensors)
- Threshold fix: `>=` changed to `>`, matching the NIR spec and snnTorch behavior
- `reset_mode="subtract"` for snnTorch compatibility (subtract-reset vs zero-reset)
- IF subtract-reset test and unknown `reset_mode` fallback handling
- Cross-framework interop tests: Sinabs LIF/IAF/ExpLeak, Rockpool LIF/CubaLIF/LI, snnTorch RSynaptic subgraph
- Cross-framework r-encoding test documenting per-framework dt conventions
- SpikingJelly NIR roundtrip demo (`examples/spikingjelly_nir_roundtrip.py`)
- Norse NIR roundtrip demo with real Norse weights (`examples/norse_nir_roundtrip.py`)
- NIR roundtrip demo: stronger input to produce visible spikes
- Documentation: added SpikingJelly, Rockpool, Sinabs, snnTorch RSynaptic sections to `docs/guides/nir_integration.md`
- Documentation: framework dt/r quick-reference table
- Documented Norse tau observation (export/import roundtrip discrepancy in Norse code)
- Removed unverified "first FPGA backend" claim from 6 files
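The strict-threshold and subtract-reset semantics fixed above can be sketched in one LIF Euler step (a minimal illustration of the two conventions, not the bridge's implementation):

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, dt=1.0, reset_mode="subtract"):
    """One LIF Euler step with a strict '>' threshold (per the NIR spec)
    and either subtract-reset (snnTorch-style) or zero-reset."""
    v = v + dt / tau * (i_in - v)
    spiked = v > v_th               # strict: v == v_th does NOT spike
    if reset_mode == "subtract":
        v = np.where(spiked, v - v_th, v)   # keep the overshoot
    else:
        v = np.where(spiked, 0.0, v)        # discard the overshoot
    return v, spiked
```

With tau=1, dt=1 a membrane landing exactly on threshold stays silent under `>`, whereas `>=` would have fired — the off-by-one-spike discrepancy the fix removes.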
ANN-to-SNN Conversion Engine¶
- `sc_neurocore.conversion.convert()`: automated PyTorch ANN to rate-coded SNN conversion
- QCFS activation (Quantization-Clip-Floor-Shift): ReLU replacement for conversion-aware training
- Threshold normalization from calibration-data activation statistics
- `ConvertedSNN.run()` and `.classify()` for inference with Poisson rate coding
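Threshold normalization from activation statistics is typically a per-layer percentile over calibration activations; a sketch of the idea (the internals of `convert()` may differ, names illustrative):

```python
import numpy as np

def normalize_thresholds(activations, percentile=99.9):
    """Per-layer firing thresholds from calibration activations.

    A high percentile rather than the max is commonly used so a single
    outlier activation does not inflate the threshold and stall firing.
    """
    return [float(np.percentile(a, percentile)) for a in activations]

rng = np.random.default_rng(1)
# Fake calibration activations for two layers with different scales.
layer_acts = [np.abs(rng.normal(size=10_000)) * s for s in (1.0, 3.0)]
thresholds = normalize_thresholds(layer_acts)
```

Layers with larger activations get proportionally larger thresholds, keeping firing rates (and hence the rate code) in a comparable range across depth.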
Learnable Delay Training¶
- `DelayLinear`: PyTorch module with trainable per-synapse delays via linear interpolation
- Differentiable delays: gradients flow through fractional delay positions
- Export to integer delays for hardware deployment via `delays_int` and `to_nir_delay_array()`
- DCLS (Dilated Convolutions with Learnable Spacings) principle applied to fully-connected SNN layers
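The trick that makes delays trainable is reading the input buffer at a fractional offset via linear interpolation, so the output is differentiable with respect to the delay. A minimal sketch (not the `DelayLinear` implementation; names illustrative):

```python
import numpy as np

def delayed_read(buffer, delay):
    """Read a past value at a fractional delay from a time-major buffer
    (buffer[-1] is 'now') using linear interpolation, so the derivative
    d(output)/d(delay) exists and gradients can flow through it."""
    lo = int(np.floor(delay))
    frac = delay - lo
    return (1.0 - frac) * buffer[-1 - lo] + frac * buffer[-2 - lo]

buf = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = delayed_read(buf, 1.5)   # halfway between t-1 (3.0) and t-2 (2.0)
```

For hardware export the learned fractional delays are simply rounded to integer time steps, which is what `delays_int` refers to.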
One-Command FPGA Deploy¶
- `sc-neurocore deploy model.nir --target artix7`: NIR/PyTorch → Verilog → project in one command
- Target presets: ice40, ecp5 (Yosys Makefile), artix7, zynq (Vivado project.tcl)
- Copies 19 HDL library modules, generates neuron SystemVerilog, build script, README
Network Engine¶
- Per-synapse delays in Projection: `delay=array` for heterogeneous axonal/synaptic delays
- Spike gating: `Population.step_all(spike_gating=True)` skips idle neurons; compute is proportional to the active count
- Weight sparsity: `Projection(weight_threshold=0.01)` skips near-zero synapses during propagation
Compiler¶
- Per-layer adaptive bitstream length: `assign_lengths()` with Hoeffding- or sensitivity-based allocation
- Mixed-precision SC networks: shallow layers use short L (fast), deep layers use long L (precise)
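The Hoeffding-based allocation follows from the two-sided Hoeffding inequality: to estimate a bitstream probability within ε with failure probability at most δ, the stream needs L ≥ ln(2/δ) / (2ε²) bits. A sketch of that calculation (not the `assign_lengths()` API):

```python
import math

def hoeffding_length(eps, delta=0.01):
    """Smallest bitstream length L with P(|p_hat - p| > eps) <= delta,
    from the two-sided Hoeffding bound L >= ln(2/delta) / (2 * eps**2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

# Per-layer allocation: loose tolerance (short streams) for shallow
# layers, tight tolerance (long streams) for deep layers.
lengths = [hoeffding_length(eps) for eps in (0.1, 0.05, 0.01)]
```

Halving ε quadruples L, which is why mixing precisions across layers pays off so strongly in total bit budget.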
Event-Driven FPGA RTL¶
- `sc_aer_encoder.v`: spike vector → AER packets via a priority encoder; idle neurons consume zero power
- `sc_event_neuron.v`: Q8.8 LIF that computes only on input events or periodic leak ticks
- `sc_aer_router.v`: distributes AER events to target neurons using a connectivity lookup table
- Total HDL modules: 19 (was 16)
Performance¶
- Lazy-load 109 neuron models: import time 200 s → 57 s
- Deferred scipy imports (stats.qmc, sparse): import time 57 s → 10 s
Infrastructure¶
- Coverage fixes: test second model access, pragma Rust-only branch
- Coverage for lazy-load path, sparse guard mock path
- Ruff F401 re-export fixes, format vectorized_layer
[3.13.3] - 2026-03-20¶
SC Arithmetic¶
- CORDIV division circuit: Python `sc_divide()` + Verilog `sc_cordiv.v` (Li et al. 2014)
- Adaptive bitstream length: Hoeffding/Chebyshev/variance bounds via `adaptive_length()`
- Sobol/Halton multi-dimensional decorrelation for per-synapse independent streams
- Chaotic RNG mode in BitstreamEncoder (logistic map)
- Sobol bitstream attention: `StochasticAttention.forward_bitstream()` with LDS variance reduction
Learning Rules¶
- BCM metaplasticity with sliding threshold (Bienenstock-Cooper-Munro 1982)
- Voltage-based STDP (Clopath et al. 2010)
- Truncated BPTT for long sequences (`TBPTTLearner`, Williams & Peng 1990)
- EWC penalty implemented (was a no-op stub) — Kirkpatrick et al. 2017
- Learnable beta/threshold on all 10 SNN cell types (ExpIF, AdEx, Lapicque, Alpha, SecondOrderLIF, IF, Synaptic)
- ConvSpikingNet now works with `train_epoch()` via `flatten_input=False`
Biological Circuits¶
- Tripartite synapse: astrocyte ↔ synapse bidirectional coupling (Araque et al. 1999)
- Rall branching dendrite: compartmental tree with 3/2 power rule
- Canonical cortical microcircuit: 5-population column (L2/3 exc/inh, L4, L5, L6)
- Astrocyte adapter: `AstrocyteNeuron` wraps the Li-Rinzel model for Population/Network
Theoretical Depth¶
- SC→quantum circuit compiler: Ry encoding, statevector simulator, layer compilation
- Zero-multiplication predictive coding SC layer (Conjecture C9: XOR=error, popcount=magnitude)
- Topological observables: winding number, Ollivier-Ricci curvature, sheaf defect
- Phi* integrated information estimation (Barrett & Seth 2011, IIT)
- Goldstone mode verification for Knm coupling spectrum
- Fault tolerance benchmark: SC vs fixed-point degradation curves
- Hardware-aware SC layer with memristive defect injection
- Noisy quantum simulation via HeronR2NoiseModel Kraus channels
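The zero-multiplication predictive-coding layer rests on a simple stochastic-computing identity: XOR marks the bits where prediction and observation streams disagree, and a popcount of those bits estimates the error magnitude without any multiplier. A sketch of that identity (illustrative, not the layer's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)
L = 50_000

def stream(p):
    """Bernoulli bitstream encoding probability p."""
    return rng.random(L) < p

pred, actual = 0.7, 0.4
s_pred, s_act = stream(pred), stream(actual)

# XOR = disagreement bits; popcount / L = error magnitude estimate.
# For independent streams E[xor] = p(1-q) + q(1-p).
err_bits = s_pred ^ s_act
err = np.count_nonzero(err_bits) / L
expected = pred * (1 - actual) + actual * (1 - pred)   # 0.54 here
```

Gates and popcounts replace the multiply-accumulate entirely, which is the point of the "zero-multiplication" claim.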
NIR Bridge¶
- Recurrent edge handling via unit-delay insertion (LSTM-like feedback)
- Multi-port subgraph support (`SCMultiPortSubgraphNode`)
Compiler¶
- IR type checker: Bitstream/Rate/Spike mismatch detection before emission
- SV/MLIR emission for GraphForward, SoftmaxAttention, KuramotoStep (was an error stub)
- Weight quantizer exported in compiler `__init__.py`
Hardware Stack¶
- AXI-Stream interface for bulk bitstream I/O (`sc_axis_interface.v`)
- DMA controller for weight upload and output readback (`sc_dma_controller.v`)
- Parameterized AXI-Lite register file (`sc_axil_cfg_param.v`)
- Clock-domain-crossing primitives: 2-FF sync, Gray counter, async FIFO (`sc_cdc_primitives.v`)
- NEON scalar-equivalence tests (13 tests for popcount, dot, max, sum, scale)
Infrastructure¶
- Rust engine wheel publishing in PyPI release workflow
- SpikeInterface/Neo adapter for experimental data import
- Static CycloneDX SBOM (v1.6)
- JAX autodiff fix: straight-through estimator for spike reset
- IIT added to typos allowlist
[3.13.2] - 2026-03-19¶
Equation → Verilog RTL Compiler¶
- `equation_compiler.py`: compile any `EquationNeuron` to synthesizable Q8.8 fixed-point Verilog
- `equation_to_fpga()`: one-liner from a Brian2-style ODE string to a Python neuron + Verilog RTL
- AST-to-Verilog expression emitter handles `+`, `-`, `*`, `/`, `**`, unary minus, comparisons
- Multi-variable ODE support (FitzHugh-Nagumo, Izhikevich, Hodgkin-Huxley)
- Threshold and reset logic auto-generated
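The front half of such a pipeline — parsing an ODE right-hand side from a string and simulating it — can be sketched with the standard `ast` module; the real `equation_compiler.py` additionally walks the same AST to emit fixed-point Verilog (function names here are illustrative, not the module's API):

```python
import ast

def make_rhs(expr, state_vars):
    """Compile a Brian2-style RHS expression string to a callable,
    rejecting any name that is not a declared state variable or 'I'."""
    tree = ast.parse(expr, mode="eval")
    allowed = set(state_vars) | {"I"}
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id not in allowed:
            raise ValueError(f"unknown name: {node.id}")
    code = compile(tree, "<ode>", "eval")
    return lambda **ns: eval(code, {"__builtins__": {}}, ns)

# Leaky integrator dv/dt = (I - v) / tau, simulated with forward Euler.
dv = make_rhs("(I - v) / 20.0", ["v"])
v, dt = 0.0, 1.0
for _ in range(200):
    v += dt * dv(v=v, I=1.0)
# v has converged close to the fixed point I = 1.0
```

Restricting the AST to whitelisted names is what keeps an `eval`-based evaluator safe for user-supplied equation strings.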
NIR Bridge¶
- `nir_bridge` package: import NIR graphs into SC-NeuroCore (FPGA backend for NIR)
- Maps 11 NIR primitives (LIF, IF, LI, Integrator, Affine, Linear, Scale, Threshold, Flatten, Input, Output)
- Recursive graph parser with topological sort, fan-in summation, nested subgraph support
- NIR integration guide, API docs, notebook (05_nir_bridge.ipynb)
Packaging & Release¶
- Restored `sc-neurocore` as the only PyPI product package and removed the unintended runtime dependency on a separate `sc-neurocore-engine` publish
- Publish automation now pushes only `sc-neurocore` to PyPI while keeping the Rust engine on the existing crate / source / CI wheel paths
- Tag pushes still trigger publish directly, so release creation no longer depends on a downstream `release.published` event
[3.13.1] - 2026-03-19¶
Packaging & Install¶
- Top-level `sc-neurocore` now requires the matching `sc-neurocore-engine` release, and `sc-neurocore info` reports engine version mismatches explicitly instead of silently mixing versions
- The dense-layer example and getting-started/docs packaging guidance now match the current public API and distinguish wheel-shipped modules from source-only modules
NIR Bridge¶
- Nested NIR subgraphs now execute through a dedicated subgraph node wrapper and reset cleanly inside `SCNetwork`
- `Flatten` now respects `start_dim`/`end_dim`, and bridge coverage is enforced instead of being omitted
- Added regression coverage for nested graphs, fan-in, cycle detection, orphan nodes, flatten edge cases, and file-based import/export
CI & Release¶
- CI now builds and installs the local engine wheel before editable/package installs, so unreleased versions no longer fail dependency resolution
- Build smoke installs both the engine wheel and the top-level wheel from local artifacts
- Publish workflow now runs from tag pushes, builds engine sdist + wheels, publishes the engine package before `sc-neurocore`, and keeps manual dispatch build-only unless publish is explicitly enabled
- Release workflow now attaches both the pure-Python wheel and sdist to GitHub Releases
Bug Fixes¶
- StochasticTransformerBlock: clamp residual and FFN intermediate values to [0, 1] — MAC output from `VectorizedSCLayer` can exceed 1.0, triggering the new input validation
- Optional dependency introspection in `sc-neurocore info` no longer crashes on broken NumPy/JAX imports
Tests¶
- Full preflight now passes at `2112 passed`, `38 skipped`, `12 xfailed`, with `100.00%` coverage
- Added audit validation tests for VectorizedSCLayer/EquationNeuron, CLI fallback coverage, dense-layer example smoke coverage, and expanded NIR bridge regressions
Documentation¶
- Replaced stale black references with ruff format in `VALIDATION.md` and `CONTRIBUTING.md`
- Synced the packaging/install docs with the released product surface
- Package naming and install guidance were corrected in `3.13.2`; `3.13.1` incorrectly treated `sc-neurocore-engine` as a separate PyPI runtime dependency
[3.13.0] - 2026-03-18¶
Python 3.14 Support¶
- CI test matrix, wheel builds, and publish workflow now include Python 3.14
- All 1 776 Python tests pass on 3.14; all dependencies compatible
- pyproject.toml classifier added
Bridge Wiring¶
- 12 missing Rust symbols exported from bridge `__init__.py`: NetworkRunner, BitstreamAverager, Izhikevich, ArcaneNeuron, 8 AI-optimized models, ContinuousAttractorNeuron
- Parity test name mapping for RustContinuousAttractorNeuron
CI Fixes¶
- Black formatting for identity/ files; pre-commit ruff upgraded v0.9.7 → v0.15.6
- Clippy: PopulationRunner::is_empty() added
- TraceEncoder: deterministic hash (byte-based, not Python hash())
- Synapse test tolerance widened for short bitstream noise
- Notebook trailing newline for end-of-file-fixer
- Removed deleted ruff rule UP038
Documentation¶
- JOSS paper rewrite: pipeline + spike raster figures, Availability section, McCulloch-Pitts/Hodgkin-Huxley citations, tightened to ~1200 words
- All docs synced: test counts (1 776/336), 111 NetworkRunner, 17 HDL, Python 3.14
- Neuron explorer notebook (04_neuron_explorer.ipynb): 5 sections, 117 models
Infrastructure¶
- `.gitattributes`: eol=lf (suppress CRLF warnings on Windows)
- Single-directory migration: `03_CODE/sc-neurocore/` is canonical repo
- PyPI deployment branch policy fixed (main added)
- 12 known Rust/Python parity divergences tracked as xfail
- 5 version-gate assertions updated
[3.12.0] - 2026-03-17¶
ArcaneNeuron + 8 AI-Optimized Models¶
- ArcaneNeuron: unified self-referential cognition model with 5 coupled subsystems (fast/working/deep/gate/predictor)
- 8 novel AI-optimized spiking neuron models: MultiTimescaleNeuron, AttentionGatedNeuron, PredictiveCodingNeuron, SelfReferentialNeuron, CompositionalBindingNeuron, DifferentiableSurrogateNeuron, ContinuousAttractorNeuron, MetaPlasticNeuron
- Total neuron count: 122 Python (113 bio + 9 AI), 111 Rust (including Arcane)
- ArcaneNeuron included in Rust NetworkRunner (111-model fused loop, was 80)
Identity Substrate¶
- `sc_neurocore.identity` package: persistent spiking network for identity continuity
- IdentitySubstrate: 3-population network (HH cortical + WB inhibitory + HR memory) with STDP
- TraceEncoder: LSH-based reasoning trace to spike pattern encoding
- StateDecoder: PCA + attractor extraction + priming context generation
- Checkpoint: Lazarus protocol save/restore/merge of complete network state (.npz)
- DirectorController: L16 cybernetic closure with monitor/diagnose/correct feedback loop
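The LSH step behind TraceEncoder can be illustrated with the classic random-hyperplane trick — a minimal sketch assuming a dense trace embedding as input; the function name and parameters here are hypothetical, not the package API:

```python
import numpy as np

def lsh_spike_pattern(embedding, n_bits=64, seed=0):
    """Random-hyperplane LSH sketch: project the embedding onto n_bits
    random directions; the sign of each projection becomes one spike bit.
    Nearby embeddings yield spike patterns that agree on most bits."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, embedding.shape[0]))
    return (planes @ embedding > 0).astype(np.uint8)

a = np.array([1.0, 0.5, -0.2])
b = np.array([1.0, 0.55, -0.2])   # a slightly perturbed trace
c = -a                             # an opposite trace
pa, pb, pc = (lsh_spike_pattern(v) for v in (a, b, c))
# Similar traces agree on most bits; the negated trace flips every bit.
assert (pa != pc).all()
assert (pa == pb).mean() > (pa == pc).mean()
```

The real encoder presumably maps these bits onto input spike trains for the substrate; this sketch only shows the locality-sensitive hashing itself.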
Network Simulation Engine¶
- Population-Projection-Network architecture with 3 backends: Python (NumPy), Rust (NetworkRunner), MPI (mpi4py)
- 6 topology generators: random, small-world, scale-free, ring, grid, all-to-all
- 12 visualization plots: raster, voltage, ISI, cross-correlogram, PSD, firing rate, phase portrait, population activity, instantaneous rate, spike train comparison, network graph, weight matrix
- 7 advanced plasticity rules: BPTT, e-prop, R-STDP, MAML, homeostatic, STP, structural
- MPI distributed simulation for billion-neuron scale via mpi4py
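One of the listed topology generators, small-world, follows the Watts-Strogatz construction — a compact NumPy sketch of the idea, not the library's implementation (function name and defaults are illustrative):

```python
import numpy as np

def small_world_adjacency(n=20, k=4, p=0.1, seed=0):
    """Watts-Strogatz sketch: undirected ring lattice with k neighbours
    per node; each forward lattice edge is rewired with probability p."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):                       # build the ring lattice
        for j in range(1, k // 2 + 1):
            adj[i, (i + j) % n] = adj[(i + j) % n, i] = True
    for i in range(n):                       # rewire with probability p
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                new = int(rng.integers(n))
                if new != i and not adj[i, new]:
                    t = (i + j) % n
                    adj[i, t] = adj[t, i] = False
                    adj[i, new] = adj[new, i] = True
    return adj

A = small_world_adjacency()
assert (A == A.T).all() and not A.diagonal().any()
```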
Rust NetworkRunner¶
- 111-model fused simulation loop with Rayon-parallel population stepping (was 80)
- CSR-sparse projection propagation
- Scales to 100K+ neurons with near-linear speedup
Model Zoo¶
- 10 pre-built network configurations: Brunel balanced, cortical column, CPG, decision-making, working memory, visual cortex V1, auditory processing, MNIST classifier, SHD speech, DVS gesture
- 3 pre-trained weight sets: MNIST (784-128-10), SHD (700-256-20), DVS gesture (256-256-11)
conda-forge¶
- Recipe ready for conda-forge distribution
Analysis Toolkit¶
- 126 spike train analysis functions across 23 modules (22 spike_stats + 1 explainability)
- Covers: basic stats, variability, rate estimation, distance metrics, correlation, spectral, temporal, stimulus, LFP coupling, surrogates, information theory, causality, dimensionality, decoding, network, point process, sorting quality, waveform, statistics, patterns, SPADE, GPFA
- Pure NumPy, zero external dependencies
- Tests: 1 776 Python total, 336 Rust total
Neuron Model Library (122 Python / 111 Rust)¶
- 108 individual model files in `neurons/models/` (one file per model), spanning 14 families: IF variants, Biophysical, Adaptive, Oscillatory, Bursting, Synaptic, Multi-compartment, Map-based, Stochastic, Population, Hardware, Modern/ML, Rate, Other
- Notable additions: TraubMiles, WilsonHR, Pospischil (5 cortical types), ConnorStevens, WangBuzsaki, PinskyRinzel, Destexhe, HuberBraun, GolombFS, MainenSejnowski
- Historical coverage from McCulloch-Pitts (1943) to Gated LIF (2022)
- 10 PyTorch training cells: LIF, IF, Synaptic, ALIF, RecurrentLIF, ExpIF, AdEx, Lapicque, Alpha, SecondOrderLIF
MNIST 99.49% Accuracy¶
- `examples/mnist_conv_train.py` — ConvSpikingNet with learnable beta/threshold
- Architecture: Conv(1->32)->LIF->Pool->Conv(32->64)->LIF->Pool->FC->LIF->FC->LIF
- Techniques: FastSigmoid surrogate, cosine LR schedule, data augmentation, membrane readout
- Trained on RTX 6000, 30 epochs, 25 minutes
- Model checkpoint: `examples/mnist_conv_train/results/conv_spiking_net_best.pt`
Intel Lava/Loihi Bridge¶
- `integrations/lava_bridge.py` — SCtoLavaConverter, export_weights_loihi
- SCDenseProcess + PySCDenseModel for Lava CPU simulation
- Weight conversion: SC probability [0,1] -> Loihi fixed-point
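The probability-to-fixed-point step can be sketched as a generic scale-round-clip — Loihi's actual weight encoding is hardware-specific (signed mantissa/exponent pairs), so this only illustrates the shape of the conversion, not `export_weights_loihi` itself:

```python
import numpy as np

def sc_to_fixed_point(p, n_bits=8):
    """Map SC probabilities in [0, 1] onto unsigned n-bit fixed point:
    scale to the integer range, round to nearest, saturate at the ends.
    (The real Loihi format differs; this shows the generic step only.)"""
    q = np.rint(np.asarray(p, dtype=np.float64) * (2**n_bits - 1))
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int32)

w = sc_to_fixed_point([0.0, 0.5, 1.0])
# 0.0 -> 0, 0.5 -> 128, 1.0 -> 255
```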
Rust Engine parity expansion (v3.8/v3.9 carry-forward)¶
- Sobol bitstream (M1): Gray-code Sobol quasi-random encoder in Rust (`sobol.rs`)
- HomeostaticLIF: adaptive threshold neuron with EMA spike rate tracking
- DendriticNeuron: XOR-nonlinearity compartmental model
- RewardStdpSynapse: eligibility trace + reward-modulated STDP
- Conv2DLayer: im2col + SC multiply-accumulate convolution
- RecurrentLayer: echo state network with state feedback
- LearningLayer: online STDP-integrated dense layer
- FusionLayer: weighted stochastic multiplexing across modalities
- MemristiveLayer: dense layer with stuck-at faults and write noise
- SpikeRecorder: buffered spike recording with firing rate and ISI stats
- ConnectomeGenerator: Watts-Strogatz and Barabási-Albert topology generators
- FaultInjector: bit-flip and stuck-at fault injection on packed bitstreams
- MLIR emitter: CIRCT hw/comb dialect IR emission (`ir/emit_mlir.rs`)
- Static synapse: completed with excitatory/inhibitory polarity
- Surrogate gradient: added Triangular and PiecewiseLinear variants
- Rust neuron models callable from Python: 111 (of 122 Python total)
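The Gray-code trick behind the Sobol encoder is that each successive sample XORs in exactly one direction number, picked by the lowest set bit of the counter. A minimal Python sketch for the first (van der Corput) dimension — the Rust `sobol.rs` kernel presumably works on packed words, but the recurrence is the same:

```python
def sobol_1d(n, bits=16):
    """Gray-code Sobol sketch, dimension 0: direction numbers are
    single-bit values, and sample i flips direction v[ctz(i)] where
    ctz(i) is the index of the lowest set bit of i."""
    v = [1 << (bits - 1 - j) for j in range(bits)]  # direction numbers
    x, out = 0, []
    for i in range(1, n + 1):
        c = (i & -i).bit_length() - 1               # lowest set bit of i
        x ^= v[c]
        out.append(x / (1 << bits))
    return out

seq = sobol_1d(4)   # -> [0.5, 0.75, 0.25, 0.375]
```

One XOR per sample is what makes this attractive as a low-discrepancy replacement for an LFSR in bitstream encoding.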
SIMD Hardening (v3.8 carry-forward)¶
- Fused `softmax_inplace_f64_dispatch` with SIMD max/sum/scale
- Hamming distance dispatch for all backends (AVX2, SVE, RVV)
- SVE/RVV softmax portable fallbacks
- Attention softmax refactored to use fused dispatch
Quantum Backend Stabilisation (v3.9 carry-forward)¶
- IBM Heron r2 noise model: depolarizing, amplitude/phase damping, readout asymmetry
- Parameter-shift gradient rule for variational quantum circuits
- Hybrid quantum-classical VQE pipeline with scipy optimizer
- QEC noise integration with surface code threshold comparison
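The parameter-shift rule listed above can be stated in a few lines: for gates generated by Pauli operators, the exact gradient is a difference of two shifted circuit evaluations. A toy sketch with an analytic one-qubit expectation standing in for a real circuit:

```python
import numpy as np

def expectation(theta):
    """Toy stand-in for a circuit evaluation: <Z> after RY(theta) on |0>,
    which is cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: exact (not finite-difference) gradient for
    Pauli-generated gates, from two shifted evaluations of the circuit."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.3
g = parameter_shift_grad(expectation, theta)
# matches the analytic derivative d/dtheta cos(theta) = -sin(theta)
assert abs(g - (-np.sin(theta))) < 1e-12
```

Because both terms are full circuit evaluations, the same rule applies unchanged when `expectation` is replaced by a noisy hardware run.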
Holonomic Adapter Ecosystem (v3.9 carry-forward)¶
- L1-L16 adapters registered in ComponentRegistry with `create_adapter()` factory
- Per-adapter benchmark suite: latency, memory, throughput (with/without JAX JIT)
- Plugin discovery via `importlib.metadata` entry points
Type Safety Cleanup (M2)¶
- Removed 235 unnecessary `type: ignore` comments (260 -> 25)
- Remaining 25 are justified: CuPy type aliases, optional imports, private method access
GPU SNN Training with Surrogate Gradients¶
- `sc_neurocore.training` — PyTorch-based differentiable SNN training module
- 3 surrogate gradient functions: FastSigmoid (Zenke 2018), SuperSpike (Zenke 2021), ATan (Fang 2021)
- `LIFCell`, `RecurrentLIFCell` — `nn.Module` LIF neurons with autograd through spikes
- `SpikingNet` — multi-layer feedforward SNN with spike-count and membrane readout
- `to_sc_weights()` — export trained float weights to [0,1] range for SC bitstream deployment
- 3 loss functions: spike count cross-entropy, membrane cross-entropy, spike rate MSE
- `train_epoch()` / `evaluate()` — training loops with temporal unrolling
- `examples/mnist_surrogate/train.py` — MNIST benchmark (~95% accuracy, 10 epochs)
- 31 tests covering surrogates, modules, and training loops
- Requires `pip install sc-neurocore[training]` or `sc-neurocore[research]`
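The surrogate-gradient idea can be shown without PyTorch: the forward pass spikes with a hard threshold, while the backward pass substitutes a smooth derivative. A NumPy sketch of the FastSigmoid shape — the training module itself wires this through `torch.autograd`, and the slope constant here is illustrative:

```python
import numpy as np

def heaviside(v, threshold=1.0):
    """Forward pass: the non-differentiable spike nonlinearity."""
    return (v >= threshold).astype(v.dtype)

def fast_sigmoid_grad(v, threshold=1.0, slope=25.0):
    """Backward pass: FastSigmoid surrogate, 1 / (slope*|v - thr| + 1)^2.
    Replaces the Heaviside's zero/undefined derivative with a smooth
    bump centred on the firing threshold."""
    return 1.0 / (slope * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 1.0, 1.8])
s = heaviside(v)          # spikes for v >= threshold
g = fast_sigmoid_grad(v)  # peaks at v == threshold, decays either side
assert s.tolist() == [0.0, 1.0, 1.0]
assert g[1] == 1.0 and g[0] < g[1] and g[2] < g[1]
```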
[3.10.0] - 2026-03-09¶
MNIST-on-FPGA Demo¶
- End-to-end pipeline: `examples/mnist_fpga/demo.py` — train (sklearn digits), PCA 64→16, quantise Q8.8, stochastic computing inference, Verilog weight export
- Float 94.2%, Q8.8 94.2%, SC 94.0% (L=1024, sign-magnitude encoding)
- Resource estimate: 16→10 config = ~56K LUTs (fits Artix-7 100T)
- `hdl/sc_dense_matrix_layer.v` — per-neuron weight dense layer for classification
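The Q8.8 quantisation step above amounts to scale-by-256, round, and saturate into an int16 — a generic sketch of the format, not the demo's exact code:

```python
import numpy as np

def to_q8_8(x):
    """Quantise floats to Q8.8 fixed point: 8 integer bits, 8 fractional
    bits, stored as int16 with saturation (range roughly [-128, 128))."""
    q = np.rint(np.asarray(x, dtype=np.float64) * 256.0)
    return np.clip(q, -32768, 32767).astype(np.int16)

def from_q8_8(q):
    """Dequantise back to float: one LSB is 1/256."""
    return q.astype(np.float64) / 256.0

w = to_q8_8([0.5, -1.25, 3.14159])
back = from_q8_8(w)
# 0.5 and -1.25 are exactly representable; pi is within half an LSB.
assert np.allclose(back[:2], [0.5, -1.25])
assert abs(back[2] - 3.14159) <= 1 / 512
```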
Vivado Tooling¶
- `tools/vivado_impl.tcl` — non-project flow: synth → place → route (250 MHz default)
- `tools/vivado_report.py` — parse timing/utilization/power reports to JSON
Tutorial¶
- `docs/tutorials/fpga_in_20_minutes.md` — 6-section FPGA deployment tutorial
Paper¶
- JOSS paper updated to submission-ready state (`paper/paper.md`)
- 12 references with DOIs, MNIST demo results, Brian2 comparison, formal verification
Documentation Overhaul¶
- README: benchmarks section (Rust SIMD, Brian2 comparison, Yosys synthesis)
- README: all 10 HDL modules listed with descriptions
- Zenodo DOI updated to 10.5281/zenodo.18906614
- CITATION.cff, .zenodo.json: DOI, version, author corrections
- CONTRIBUTING.md, VALIDATION.md, getting-started.md: test counts, Python version
- Yosys MODULES list updated (10 modules)
Fixes¶
- Zenodo author list corrected (sole author: Miroslav Šotek)
- DOI badge in README points to latest Zenodo record
[3.9.1] - 2026-03-08¶
Benchmarks¶
- 20-variant Brunel translator suite: comprehensive characterization of SC-NeuroCore against Brian2 across neuron models (LIF, Izhikevich, homeostatic), timing variants, synapse types (STDP, dot product, Sobol bitstream), layer architectures (JAX, recurrent, memristive), and acceleration backends (Numba JIT, PyTorch CUDA GTX 1060, vectorized NumPy)
- V18 Numba JIT: 9.5× speedup over per-neuron Python loop
- V19 PyTorch CUDA: 8.7× speedup on GTX 1060 6GB
- V14 Sobol bitstream: 1.04× Brian2 ratio (closest match)
- 19 translator unit tests (`test_brunel_translator.py`)
- Fix BENCHMARKS.md CPU: i5-11600K @ 3.9 GHz (AVX-512, DL Boost)
- Fix 3 delta-PSC wiring bugs: v_reset omission, RIdt dilution, Poisson-as-current
- Comprehensive BENCHMARKS.md with 13+ sections and measured numbers
- Rust Criterion: 31 benchmarks captured (AVX-512)
- Brian2 2.10.1 SNN comparison: Brunel balanced network head-to-head
- NeuroBench-aligned metrics: 4 configurations, up to 847 MOP/s
- v2 vs v3 PyO3 speedup: 7.3× on large dense forward (128→64)
- Advanced module benchmarks: quantum hybrid, GNN, S-Former, BCI, DVS, chaos RNG
- Yosys synthesis tooling (`tools/yosys_synth.py`, `tools/yosys_synth.tcl`)
- CuPy 14.0.1 installed for GPU VectorizedSCLayer
Paper¶
- Updated JOSS paper with measured Criterion numbers (41.3 Gbit/s pack, 224 Mstep/s LIF)
- Replaced estimated FPGA claim with Yosys tooling reference
[3.9.0] - 2026-03-06¶
SCPN Layers¶
- L8-L16 pure NumPy layers: 9 new layer files completing the full 16-layer SCPN stack (`scpn/layers/l8_phase_field.py` through `l16_director.py`)
- 16-layer registry: `LAYER_REGISTRY` dict; `create_full_stack()` now returns all 16 layers
- Full integrated step: `run_integrated_step()` chains L1→L16 with inter-layer coupling
Quantum Error Correction¶
- SurfaceCodeShield: d=3 rotated surface code with X/Z stabilizers, syndrome measurement, lookup-table decoding — corrects arbitrary single-qubit errors
- Extensible to d=5 (encode/decode/syndrome paths support arbitrary odd distance)
Benchmarks¶
- Fixed double-step bug in `benchmarks/snn_comparison.py` (neurons were advanced twice per timestep)
- Fixed Lava stub notes (requires Loihi 2 hardware)
- Fixed `benchmark_suite.py` output path → `benchmarks/results/`
- SNN comparison results recorded in `docs/benchmarks/BENCHMARKS.md`
Formal Verification¶
- LIF neuron: `hdl/formal/sc_lif_neuron.sby` + `sc_lif_neuron_formal.v` — 5 properties (reset, spike-reset, refractory clamp, counter bound, spike reachability)
- Bitstream synapse: `hdl/formal/sc_bitstream_synapse.sby` + `sc_bitstream_synapse_formal.v` — 4 properties (AND correctness, zero propagation, full-high, input coverage)
Testing¶
- 6 cross-layer coupling integration tests (`test_scpn_cross_layer.py`)
- 9 surface code QEC tests (`test_qec_surface.py`)
- Test count: 945 → 960+
Documentation¶
- JOSS paper: updated test count (960), qualified LUT claim, added Brunel/NeuroBench/LFSR bib entries
[3.8.2] - 2026-03-06¶
Documentation & Adoption¶
- BENCHMARKS.md: Populated with 14 real benchmark entries (i5-11600K, NumPy 1.26.4), Rust engine Criterion numbers, comparison context, reproduction instructions
- JOSS paper draft: `paper/paper.md` + `paper.bib` (6 references) — statement of need, architecture, key features, QA
- End-to-end notebook: `notebooks/03_end_to_end_pipeline.ipynb` — 7-cell walkthrough (encode→synapse→neuron→VectorizedSCLayer→accuracy analysis)
Testing¶
- 18 Hypothesis property-based tests: Bitstream encoding roundtrip, LFSR determinism, neuron output constraints, layer shape invariants, RNG range/shape, recorder accumulation, encoder binary output
- Test count: 887 → 911 tests passing, 98.41% coverage
Issues Closed¶
[3.8.1] - 2026-03-06¶
Enterprise Hardening¶
- 11 CI workflows: ci, v3-engine, v3-wheels, benchmark, docs, pre-commit, codeql, scorecard, stale, release, publish — all SHA-pinned, concurrency-grouped
- Supply chain: Every GitHub Action SHA-pinned (30+ refs), `pypa/gh-action-pypi-publish` pinned, dependabot groups GH Actions PRs
- Security: Bandit SAST in CI, dependabot security updates enabled, private vulnerability reporting enabled, CodeQL weekly schedule
- Branch protection: 6 required status checks (lint, test×2, spdx-guard, build, pre-commit)
- Dockerfile: Multi-stage build, Python 3.12, non-root user, OCI labels, healthcheck
- Preflight gate: `tools/preflight.py` (black + bandit + spdx-guard + pytest), `.githooks/pre-push` hook
- Release pipeline: `publish.yml` (PyPI OIDC trusted publisher, 12 platform wheels), `release.yml` attaches sdist to GitHub Releases
- Repo hygiene: `.dockerignore`, `.editorconfig`, `.gitattributes`, `CONTRIBUTORS.md`, `CODEOWNERS`, PR template, issue templates (YAML forms), dependabot commit-message prefixes
- Labels: 22 labels with colors (ci, security, breaking-change, hdl, performance, needs-review, pinned, roadmap, stale)
- Settings: Delete-branch-on-merge, wiki/projects disabled, OpenSSF Scorecard badge
Lint Enforcement & Python Version¶
- ruff check enforced in CI: 258 unused/deprecated imports auto-fixed across 138 files
- CI test matrix expanded: Python 3.10, 3.11, 3.12 (dropped 3.9 — EOL, autoray/PennyLane incompatible)
- `requires-python` bumped to `>=3.10`: badge, classifiers, black/ruff target-version updated
- bandit added to `[dev]` extras: contributors can now `make lint` after `pip install -e ".[dev]"`
- benchmark.yml permissions tightened: `permissions: {}` at top, scoped per-job
- SECURITY.md / SUPPORT.md: GitHub Security Advisories link added
- VALIDATION.md refreshed: 1058 tests, 98% gate, ruff/bandit/spdx-guard/codeql/scorecard gates documented
[3.8.0] - 2026-03-05¶
Hardening & Documentation¶
- Coverage gate raised to 98%: De-omitted 6 modules (chaos/rng, analysis/explainability, physics/wolfram_hypergraph, robotics/swarm, learning/neuroevolution, spatial/*) plus bio/neuromodulation. 34 new tests, 1058 total, 98.10% coverage
- NumPy 2.x audit: Zero deprecated calls found — codebase fully compatible
- Full API documentation: 25 new mkdocstrings pages, all 44 subpackages wired into nav. Reorganized into Core / Compiler & Export / Domain Modules / Infrastructure sections
- Stale issue automation: `.github/workflows/stale.yml` — weekly sweep, 60+14 day lifecycle, exempt: pinned/security/roadmap
- CI coverage gate sync: `ci.yml` and `pyproject.toml` both enforce `fail_under = 98`
[3.7.0] - 2026-02-11¶
Adaptive Runtime Engine -- HDC/VSA, SCPN Petri Nets, Fault-Tolerant Logic¶
- HDC/VSA kernel: `BitStreamTensor` gains `xor`, `xor_inplace`, `rotate_right`, `hamming_distance`, `bundle` methods for hyper-dimensional computing on 10,000-bit vectors
- SIMD fused XOR+popcount: AVX-512 VPOPCNTDQ / AVX2 / portable dispatch for hamming distance hot path
- PyBitStreamTensor: New `#[pyclass]` exposing full HDC algebra to Python (13 methods)
- HDCVector: High-level Python class with operator overloading (`*=` bind, `+=` bundle, `.similarity()`, `.permute()`)
- PetriNetEngine: Stochastic Colored Petri Net engine wrapping two `DenseLayer` instances for Places->Transitions->Places firing
- Fault-tolerant logic: Boolean logic with stochastic redundancy (1024-bit) survives 40%+ bit-flip rates
- 44 new tests: 15 Rust integration + 20 Python HDC + 9 Python Petri Net
- 2 demos: HDC symbolic query ("Capital of France?"), safety-critical Boolean logic with error sweep
- Comprehensive study: `docs/research/SC_NEUROCORE_V3.7_ADAPTIVE_RUNTIME_ENGINE_STUDY.md`
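The HDC algebra above (bind, bundle, Hamming distance) can be sketched in a few lines of NumPy — this mirrors the "Capital of France?" demo's structure under illustrative names, not the `HDCVector` API itself:

```python
import numpy as np

D = 10_000                                   # hypervector dimensionality
rng = np.random.default_rng(42)
rand_hv = lambda: rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: associates a role with a filler; self-inverse."""
    return a ^ b

def bundle(*vs):
    """Majority-vote bundling: superposes several hypervectors."""
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def hamming(a, b):
    """Normalised Hamming distance; ~0.5 for unrelated vectors."""
    return np.count_nonzero(a != b) / D

country, capital, lang = rand_hv(), rand_hv(), rand_hv()
france, paris, french = rand_hv(), rand_hv(), rand_hv()
record = bundle(bind(country, france), bind(capital, paris), bind(lang, french))
# Unbinding the `capital` role recovers something close to `paris`
# (distance ~0.25), while an unrelated vector stays near 0.5.
assert hamming(bind(record, capital), paris) < 0.35
assert hamming(rand_hv(), paris) > 0.45
```

Because XOR is self-inverse, the same `bind` call both stores and queries a role-filler pair; noise from the other bundled pairs only raises the distance to roughly 0.25, which is still clearly separable from chance.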
[3.6.0] - 2026-02-10¶
Fused Dense Pipeline + Fast PRNG + Batch Forward¶
- Fused encode+AND+popcount: `forward_fused()` eliminates intermediate input bitstream materialization
- Fast PRNG switch: xoshiro256++ for dense fast-path input encoding and numpy batch encoding
- Batched dense API: `DenseLayer.forward_batch_numpy()` processes N samples in one FFI call
- New diagnostics: criterion benches for fused dense, encode+popcount, batch dense, and PRNG throughput
- Version/test/docs update: bumped to 3.6.0 with the fused dense pipeline test suite and migration notes
[3.5.0] - 2026-02-10¶
SIMD Pipeline Acceleration¶
- SIMD fused AND+popcount: AVX-512 VPOPCNTDQ accelerated dense inner loop with AVX2 fallback
- SIMD Bernoulli encode: AVX-512BW/AVX2 threshold compare path for packed Bernoulli generation
- Flat weight storage: Contiguous `[neuron][input][word]` packed layout for cache-friendly access
- Zero-allocation LIF batch: Pre-allocated numpy outputs for batch LIF APIs
- Criterion benchmarks: Added fused-and-popcount and SIMD Bernoulli diagnostics
[3.4.0] - 2026-02-10¶
SIMD Pack, LIF Optimization, Rayon Guard¶
- SIMD pack vectorization: AVX-512/AVX2/portable fast packing (closes 6x Blueprint target)
- Branchless LIF mask: Eliminates branches in fixed-point sign extension
- batch_lif_run_multi(): Parallel multi-neuron batch execution via rayon
- Rayon work threshold: Avoids thread-pool overhead at small input counts
- Criterion benchmarks: Added pack_fast, pack_dispatch, lif_100k_steps
[3.3.0] - 2026-02-10¶
Fast Bernoulli, Fused AND+Popcount, Zero-Copy Prepacked¶
- bernoulli_packed_fast: 8x less RNG bandwidth via byte-threshold encoding
- Fused AND+popcount: Eliminates intermediate buffer allocation in neuron compute
- forward_prepacked_numpy(): True zero-copy from numpy 2D uint64 arrays
- set_num_threads(): Rayon thread pool configuration for tuning parallelism
- Criterion benchmarks: Added bernoulli_packed_fast benchmark
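The packed-Bernoulli plus fused AND+popcount pipeline can be sketched in NumPy — a slow reference model of what the Rust kernels compute, with illustrative function names rather than the engine API:

```python
import numpy as np

def bernoulli_packed(p, n_bits=1024, seed=0):
    """Encode probability p as a packed uint64 bitstream: each bit is an
    independent Bernoulli(p) draw, packed 64 bits per word."""
    rng = np.random.default_rng(seed)
    bits = rng.random(n_bits) < p
    return np.packbits(bits).view(np.uint64)

def and_popcount(a, b):
    """Fused AND + popcount: the SC multiply of two unipolar streams.
    For uncorrelated streams, P(a AND b) = P(a) * P(b)."""
    return int(np.unpackbits((a & b).view(np.uint8)).sum())

L = 1 << 16                                   # 65536-bit streams
x = bernoulli_packed(0.5, L, seed=1)
w = bernoulli_packed(0.5, L, seed=2)
est = and_popcount(x, w) / L                  # estimates 0.5 * 0.5
assert abs(est - 0.25) < 0.02
```

The fused version avoids ever materialising the AND result as a separate buffer; here the intermediate `a & b` makes the two logical steps visible.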
[3.2.0] - 2026-02-10¶
Benchmark CI, Single-Call Dense Forward, Parallel Encoding¶
- Criterion Benchmarks: Expanded suite with bernoulli encoding comparison and dense forward variants
- Benchmark CI: Automated criterion runs with artifact upload
- DenseLayer.forward_numpy(): Single FFI call with numpy input/output plus parallel encoding
- Parallel batch_encode_numpy: Rayon-parallelized probability encoding
- Repo cleanup: Added local `.gitignore` for generated artifacts
[3.1.0] - 2026-02-10¶
Dense Forward Optimization & PyPI Publishing¶
- Direct Packed Bernoulli: `bernoulli_packed()` eliminates `Vec<u8>` intermediate allocations
- Parallel Encoding: `DenseLayer.forward_fast()` parallelizes input encoding with per-input RNGs
- Pre-packed Forward: `DenseLayer.forward_prepacked()` accepts pre-encoded numpy/list inputs and skips encoding
- batch_encode_numpy: Returns a 2-D numpy array instead of nested Python lists
- PyPI Publishing: Added automated wheel upload on `v3.*` tags via Trusted Publisher workflow
- Updated Benchmarks: Added dense `fast` and `prepacked` benchmark variants
[3.0.0] - 2026-02-10¶
Performance Optimization & Stable Release¶
- NumPy Zero-Copy: `pack_bitstream_numpy()`, `popcount_numpy()`, `unpack_bitstream_numpy()` — eliminate FFI marshalling overhead
- Batch Operations: `batch_lif_run()`, `batch_lif_run_varying()`, `batch_encode()` — process arrays in single FFI calls
batch_lif_run(),batch_lif_run_varying(),batch_encode()— process arrays in single FFI calls - Verilator CI: Co-simulation tests run automatically on Ubuntu runners
- Updated Benchmarks: Formal report showing true kernel performance with zero-copy interop
- Bridge Version Fix: `bridge/pyproject.toml` version now matches engine
Release Candidate (3.0.0-rc.1)¶
- IR Python Bridge: Full PyO3 bindings for ScGraphBuilder, ScGraph, verify, print, parse, emit_sv
- Co-sim Activation: Verilator compilation + simulation when available; graceful skip preserved
- Wheel CI: Cross-platform wheel builds (Linux/macOS/Windows x Python 3.9-3.12)
- Benchmark Report: Formal v2-vs-v3 performance comparison with Blueprint section 8 targets
- IR Demo: Real end-to-end Python->IR->verification->SystemVerilog demo
HDL Compilation Pipeline (3.0.0-beta.1)¶
- SC IR: Rust-native intermediate representation with 11 op types
- SV Emitter: Compile IR graphs to synthesizable SystemVerilog
- Co-sim: Verilator-based verification against Rust golden model
- CI: Expanded test coverage to include all differentiation, acceleration, integration, and HDL Python tests
Integration & Hardening¶
- SSGF-compatible Kuramoto solver (`step_ssgf`, `run_ssgf`)
- Property-based testing with proptest (12 property tests)
- Multi-head attention (`forward_multihead`)
- SC-mode GNN (`forward_sc`)
- End-to-end training demo
- Comprehensive rustdoc
Differentiation & Acceleration¶
- Surrogate gradient LIF (FastSigmoid, SuperSpike, ArcTan)
- DifferentiableDenseLayer for backpropagation
- Stochastic attention (rate + SC mode)
- Graph neural network layer
- Kuramoto oscillator solver
- Criterion benchmarks + v2/v3 comparison
Foundation¶
- Rust engine with PyO3 bindings
- Bit-exact LFSR, LIF neuron, dense layer
- SIMD dispatch (AVX-512, AVX2, NEON, portable)
- Python bridge with v2-compatible API
- Equivalence test suite
[2.2.0] - 2026-02-09¶
Added¶
- Module Discoverability: Populated 36 stub `__init__.py` files with proper `__all__` exports and lazy imports. Every package now supports `from sc_neurocore.X import Y` without touching internals.
- MkDocs API Documentation: Added `mkdocs.yml` with mkdocstrings plugin, `docs/index.md`, `docs/getting-started.md`, `docs/architecture.md`, and 17 API reference stubs in `docs/api/`.
- Examples Directory: 6 runnable example scripts demonstrating bitstream encoding, neuron layers, vectorized inference, SCPN stack, HDL generation, and ensemble consensus (`examples/01–06`).
- Module Docstrings: Added module-level docstrings to `pipeline/ingestion.py`, `pipeline/training.py`, `utils/model_bridge.py`, `ensembles/orchestrator.py`.
Changed¶
- Print → Logging: Converted 60+ `print()` calls across 25 source modules to structured `logging` with `getLogger(__name__)` and %-style formatting. Dashboard and drivers intentionally excluded (stdout by design).
- CI Coverage Threshold: Raised `--cov-fail-under` from 50 to 97 in `.github/workflows/ci.yml` to match actual coverage.
- Version bump: 2.1.0 → 2.2.0.
Fixed¶
- Unused Imports: Removed dead imports from 7 files (`bio/uploading.py`, `core/replication.py`, `core/immortality.py`, `export/onnx_exporter.py`, `dashboard/text_dashboard.py`, `hdl_gen/verilog_generator.py`, `viz/web_viz.py`).
- Input Validation: `VectorizedSCLayer.forward()` now raises `ValueError` on wrong-shape input instead of silently producing garbage.
- File I/O Error Handling: `onnx_exporter.py`, `immortality.py`, `verilog_generator.py`, and `replication.py` now catch `OSError` on file operations and log meaningful messages.
Security¶
- Pickle Allowlist: Replaced wildcard `'numpy.core.numeric': {'*'}` with explicit `{'_frombuffer', 'scalar'}` in `core/immortality.py`.
- Path Traversal Prevention: `core/replication.py` now validates that the destination directory is within or below the working directory via `os.path.realpath()` + `os.path.relpath()`.
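The containment check described above can be sketched as follows — a minimal illustration of the `realpath` + `relpath` pattern, not the code in `core/replication.py`:

```python
import os

def is_within(base, target):
    """Resolve both paths (following symlinks and `..` segments) and
    verify that `target` does not escape `base`: the relative path from
    base to target must not start with the parent-directory marker."""
    base = os.path.realpath(base)
    target = os.path.realpath(target)
    return not os.path.relpath(target, base).startswith(os.pardir)

assert is_within(".", "./subdir/file.txt")
assert not is_within(".", "../outside.txt")
```

Resolving with `realpath` first is the important step: a naive string-prefix check can be defeated by `..` segments or symlinks that a resolved path cannot hide.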
[2.1.0] - 2026-02-08¶
Fixed (Critical)¶
- HDL Bitstream Encoder Seed Decorrelation: All parallel encoders shared hardcoded seed `0xACE1`, producing correlated bitstreams and breaking SC multiplication (`P(x AND x) = P(x)` instead of `P(x)*P(w)`). Added per-instance `SEED_INIT` parameter with prime-stride offsets (input: `0xACE1 + i*7`, weight: `0xBEEF + i*13`).
- HDL Missing Port Connections: `noise_in` and `v_out` were floating on LIF neuron instances in `sc_dense_layer_core.v`. Connected via wire buses.
- HDL Duplicate Port: Removed duplicate `.stream_len` in `sc_neurocore_top.v`.
- Fixed-Point Overflow: `FixedPointLIFNeuron` now applies `_mask()` for proper two's complement overflow wrapping on membrane potential.
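Why the shared seed broke SC multiplication can be reproduced in a few lines — a NumPy analogue of the correlated-LFSR bug (the HDL used LFSR seeds; here the RNG seed plays the same role):

```python
import numpy as np

def bitstream(p, n=4096, seed=0):
    """Unipolar SC encoding: bit_i = 1 with probability p, driven by a
    seeded random source (standing in for the hardware LFSR)."""
    return np.random.default_rng(seed).random(n) < p

p_x, p_w = 0.6, 0.5
# Correlated encoders (shared seed): both streams threshold the *same*
# random sequence, so AND collapses to min(p_x, p_w), not a product.
bad = (bitstream(p_x, seed=0xACE1) & bitstream(p_w, seed=0xACE1)).mean()
# Decorrelated encoders (distinct seeds): AND estimates p_x * p_w.
good = (bitstream(p_x, seed=0xACE1) & bitstream(p_w, seed=0xBEEF)).mean()
assert abs(good - p_x * p_w) < 0.05   # ~0.30
assert abs(bad - min(p_x, p_w)) < 0.05  # ~0.50, the bug
```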
Added¶
- GPU Acceleration Backend (`accel/gpu_backend.py`):
    - CuPy/NumPy dual-path with automatic GPU detection and CPU fallback.
    - `gpu_pack_bitstream()`, `gpu_vec_and()`, `gpu_popcount()`, `gpu_vec_mac()`.
    - `VectorizedSCLayer` auto-selects GPU when CuPy is available.
- Performance Benchmark Suite (`scripts/benchmark_suite.py`):
    - 14 benchmarks across 5 categories (scalar, packed ops, dense layer, full pipeline, GPU).
    - `--full` mode (10x iterations), `--markdown` output to `BENCHMARKS.md`.
- CI/CD Pipeline (`.github/workflows/sc-neurocore-ci.yml`):
    - Lint (black + mypy), Test (Python 3.9/3.11/3.12 matrix, coverage >= 60%), Build (wheel + install verification).
- Co-Simulation Harness:
    - `hdl/tb_sc_lif_neuron.v`: Verilog testbench reading stimuli.txt, writing results_verilog.txt for bit-exact comparison.
    - `scripts/cosim_gen_and_check.py`: CLI driver with `--generate` and `--check`.
- Bit-True Python Models:
    - `FixedPointLFSR`: 16-bit maximal-length LFSR (period 65535).
    - `FixedPointBitstreamEncoder`: LFSR + unsigned comparator.
    - `_mask()`: Two's complement sign-extension with overflow wrap.
- Public API Surface: Root `__init__.py` exports 28 symbols across 7 subpackages. All subpackage `__init__.py` files populated.
- Tiered Module System: 43 subpackages categorised as `core` (7), `research` (24+), or `contrib` (5). Install extras: `[gpu]`, `[research]`, `[contrib]`.
- GPU Backend Tests: 17 tests covering all GPU primitives and VectorizedSCLayer integration.
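A 16-bit maximal-length LFSR of the kind `FixedPointLFSR` models can be verified exhaustively in Python — a Galois-form sketch with the classic primitive polynomial x^16 + x^14 + x^13 + x^11 + 1 (the project's implementation may use a Fibonacci form; both have period 2^16 - 1):

```python
def lfsr16_step(state, taps=0xB400):
    """One Galois-LFSR step: shift right; if the ejected bit was 1,
    XOR in the tap mask for x^16 + x^14 + x^13 + x^11 + 1."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state

seed = 0xACE1
state, period = seed, 0
while True:                    # walk the cycle until the seed recurs
    state = lfsr16_step(state)
    period += 1
    if state == seed:
        break
assert period == 65535         # maximal: every nonzero 16-bit state visited
```

A maximal period is exactly what the unsigned-comparator encoder relies on: over one full cycle the LFSR output is a permutation of all nonzero 16-bit values, so thresholding it yields a bitstream whose mean converges to the target probability.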
Changed¶
- Version bump: 2.0.0 -> 2.1.0.
- `pyproject.toml`: Added tool configs (pytest, black, mypy), tiered extras.
- `VectorizedSCLayer`: Refactored to use GPU backend with CPU fallback.
[2.0.0] - 2026-01-12¶
Added¶
- Sapience & Sentience (v2.2.0):
    - `MetaCognitionLoop`: Computational self-awareness and self-modeling.
    - `NeuromodulatorSystem`: Dopamine/Serotonin emotional state modulation.
    - `NeuroArtGenerator`: Generative AI for internal state expression.
    - `AsimovGovernor`: Ethical constraint system (Three Laws).
    - `MindDescriptionLanguage` (MDL): Substrate-independent soul serialization.
    - `DigitalSoul`: Persistence and reincarnation protocols.
    - `VonNeumannProbe`: Code-level self-replication.
- Galactic Scale (v2.1.0):
    - `InterstellarDTN`: Long-range delay-tolerant networking.
    - `DysonPowerGrid`: Stellar-scale energy management.
    - `KardashevEstimator`: Civilization Type metrics.
    - `DarkForestAgent`: Game-theoretic survival logic.
    - `MPIDriver`: Distributed cluster-scale simulation.
    - `SNNGeneticEvolver`: Automated architecture optimization.
- Transcendent & Omega (v2.0.5):
    - `HeatDeathLayer`: Entropy-survival computing.
    - `PlanckGrid`: Spacetime lattice theoretical limits.
    - `HolographicBoundary`: 3D-to-2D info mapping (AdS/CFT).
    - `EverettTreeLayer`: Many-Worlds branching solver.
    - `WolframHypergraph`: Graph-rewrite universe evolution.
    - `CategoryTheoryBridge`: Unified mathematical functors.
    - `FormalVerifier`: SMT-based safety proofs.
- Exotic & Frontiers (v2.0.0):
    - `VectorizedSCLayer`: 64-bit packed JIT-accelerated core.
    - `QuantumStochasticLayer`: VQC qubit rotation bridge.
    - `StochasticTransformerBlock`: Spike-driven attention.
    - `MemristiveDenseLayer`: Hardware-aware analog simulation.
    - `StochasticCPG`: Robotic locomotion oscillators.
    - `MyceliumLayer`: Fungal network dynamics.
    - `BCIDecoder`: Neural signal (EEG) interface.
    - `DVSInputLayer`: Event Camera (AER) processing.
    - `EnergyProfiler`: 45nm Energy/CO2 estimation.
    - `WatermarkInjector`: IP protection security backdoors.
Optimized¶
- `BitstreamAverager`: 6x speedup using running sum algorithm.
- `BitstreamEncoder`: Added Sobol Sequence (LDS) mode for faster convergence.
Fixed¶
- Fixed f-string syntax in Verilog generator.
- Fixed dimension mismatch in Attention mechanism.
- Addressed Windows encoding issues in documentation generation.
[1.0.0] - 2025-12-03¶
- Initial Release: Stochastic Neurons, Synapses, and Basic Bitstream Utilities.