02 — Neuro-Symbolic Compiler: Petri Nets → Stochastic Neurons¶

SCPN Fusion Core includes a neuro-symbolic compiler that converts Petri net control logic into stochastic LIF (leaky integrate-and-fire) neuron networks. This tutorial walks through the full pipeline:

  1. Define a Petri net
  2. Compile it to a stochastic neural network
  3. Run inference (dense float-path)
  4. Export/import artifacts for deployment

License: © 1998–2026 Miroslav Šotek. GNU AGPL v3.



In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from scpn_fusion.scpn import StochasticPetriNet, FusionCompiler

Step 1: Define a Plasma Control Petri Net¶

We model a simplified tokamak position controller with:

  • 4 input places (sensor observations: R_high, R_low, Z_high, Z_low)
  • 4 transitions (decision logic)
  • 4 output places (actuator commands: PF_up, PF_down, PF_in, PF_out)
In [ ]:
net = StochasticPetriNet()

# Input places (sensor observations)
net.add_place("R_high", initial_tokens=0.0)
net.add_place("R_low",  initial_tokens=0.0)
net.add_place("Z_high", initial_tokens=0.0)
net.add_place("Z_low",  initial_tokens=0.0)

# Output places (actuator commands)
net.add_place("PF_up",   initial_tokens=0.0)
net.add_place("PF_down", initial_tokens=0.0)
net.add_place("PF_in",   initial_tokens=0.0)
net.add_place("PF_out",  initial_tokens=0.0)

# Transitions (control logic)
net.add_transition("T_correct_R_high", threshold=0.5)
net.add_transition("T_correct_R_low",  threshold=0.5)
net.add_transition("T_correct_Z_high", threshold=0.5)
net.add_transition("T_correct_Z_low",  threshold=0.5)

# Arcs: if R is too high → move plasma inward
net.add_arc("R_high", "T_correct_R_high", weight=1.0)
net.add_arc("T_correct_R_high", "PF_in", weight=1.0)

# If R is too low → move plasma outward
net.add_arc("R_low", "T_correct_R_low", weight=1.0)
net.add_arc("T_correct_R_low", "PF_out", weight=1.0)

# If Z is too high → push plasma down
net.add_arc("Z_high", "T_correct_Z_high", weight=1.0)
net.add_arc("T_correct_Z_high", "PF_down", weight=1.0)

# If Z is too low → push plasma up
net.add_arc("Z_low", "T_correct_Z_low", weight=1.0)
net.add_arc("T_correct_Z_low", "PF_up", weight=1.0)

net.compile()
print(net.summary())

Step 2: Compile to Stochastic Neural Network¶

The compiler maps each transition to a stochastic LIF neuron. If sc-neurocore is installed, it uses hardware-accurate bitstream encoding. Otherwise, it falls back to NumPy float computation.
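The exact encoding used by sc-neurocore is not reproduced here, but the stochastic-computing idea behind `bitstream_length=1024` can be sketched in plain NumPy. Everything below (`encode`, `decode`, the AND trick) is an illustrative assumption, not the library's API: a value in [0, 1] is represented as a Bernoulli bitstream whose bit density equals the value, and bitwise AND of two independent streams multiplies the encoded values.

In [ ]:
```python
import numpy as np

rng = np.random.default_rng(42)
L = 1024  # bitstream length, matching bitstream_length=1024 below

def encode(p, length=L):
    """Encode a probability p in [0, 1] as a Bernoulli bitstream."""
    return (rng.random(length) < p).astype(np.uint8)

def decode(bits):
    """Recover the encoded value as the mean bit density."""
    return bits.mean()

a, b = encode(0.8), encode(0.5)
# Bitwise AND of independent streams multiplies the encoded values,
# up to sampling noise of order 1/sqrt(L):
prod = decode(a & b)  # ~ 0.8 * 0.5 = 0.4
print(f"decode(a)~{decode(a):.3f}, decode(b)~{decode(b):.3f}, decode(a&b)~{prod:.3f}")
```

The multiply-by-AND trick only holds for independent streams; correlated bitstreams bias the product, which is one reason hardware-accurate encodings care about RNG quality.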

In [ ]:
compiler = FusionCompiler(bitstream_length=1024, seed=42)
compiled = compiler.compile(net)

print(f"Places:      {compiled.n_places}")
print(f"Transitions: {compiled.n_transitions}")
print(f"Stochastic:  {compiled.has_stochastic_path}")
print(f"Firing mode: {compiled.firing_mode}")
print()
print(compiled.summary())

Step 3: Run Inference¶

We simulate a scenario where the plasma is displaced to R_high and Z_low. The compiled network should activate PF_in (radial correction) and PF_up (vertical correction).

In [ ]:
# Inject observation: plasma displaced R_high + Z_low
marking = np.zeros(compiled.n_places)
marking[0] = 0.8  # R_high active
marking[3] = 0.9  # Z_low active

W_in = compiled.W_in.toarray()
W_out = compiled.W_out.toarray()

print("Initial marking:", dict(zip(net.place_names, marking)))

# Step: compute transition firing
currents = W_in @ marking
fired = (currents >= compiled.thresholds).astype(float)
consumed = W_in.T @ fired
produced = W_out @ fired
new_marking = np.clip(marking - consumed + produced, 0.0, 1.0)

print("\nFired transitions:", dict(zip(net.transition_names, fired)))
print("\nNew marking:", dict(zip(net.place_names, new_marking)))
print("\n→ PF_in activated:", new_marking[6] > 0)   # PF_in
print("→ PF_up activated:", new_marking[4] > 0)   # PF_up

Step 4: Multi-Step Evolution¶

Run the network for 30 steps with a time-varying disturbance signal.

In [ ]:
n_steps = 30
history = np.zeros((n_steps + 1, compiled.n_places))
marking = np.zeros(compiled.n_places)
history[0] = marking

for k in range(n_steps):
    # Time-varying disturbance
    t = k / n_steps
    marking[0] = 0.6 * np.sin(2 * np.pi * t) ** 2   # R_high oscillation
    marking[3] = 0.5 * np.cos(2 * np.pi * t) ** 2   # Z_low oscillation
    
    currents = W_in @ marking
    fired = (currents >= compiled.thresholds).astype(float)
    consumed = W_in.T @ fired
    produced = W_out @ fired
    marking = np.clip(marking - consumed + produced, 0.0, 1.0)
    history[k + 1] = marking

fig, axes = plt.subplots(2, 1, figsize=(10, 6), sharex=True)
for i in range(4):
    axes[0].plot(history[:, i], label=net.place_names[i])
axes[0].set_ylabel("Input Places")
axes[0].legend(loc="upper right")
axes[0].set_title("Petri Net Token Evolution (30 steps)")

for i in range(4, 8):
    axes[1].plot(history[:, i], label=net.place_names[i])
axes[1].set_ylabel("Output Places")
axes[1].set_xlabel("Step")
axes[1].legend(loc="upper right")
plt.tight_layout()
plt.show()

Step 5: Artifact Export / Import¶

The compiled network can be serialised as a JSON artifact for deployment on embedded hardware or real-time controllers.

In [ ]:
import tempfile, os
from scpn_fusion.scpn import load_artifact, save_artifact

# Export
artifact = compiled.export_artifact(
    name="position_controller_v1",
    dt_control_s=0.001,
    readout_config={
        "action_specs": [
            {"place_idx": 4, "label": "PF_up"},
            {"place_idx": 5, "label": "PF_down"},
            {"place_idx": 6, "label": "PF_in"},
            {"place_idx": 7, "label": "PF_out"},
        ],
        "gains": [1000.0, 1000.0, 500.0, 500.0],
        "abs_max": [5000.0, 5000.0, 3000.0, 3000.0],
        "slew_per_s": [1e5, 1e5, 5e4, 5e4],
    },
    injection_config=[
        {"place_idx": 0, "label": "R_high"},
        {"place_idx": 1, "label": "R_low"},
        {"place_idx": 2, "label": "Z_high"},
        {"place_idx": 3, "label": "Z_low"},
    ],
)

fd, path = tempfile.mkstemp(suffix=".scpnctl.json")
os.close(fd)
save_artifact(artifact, path)
print(f"Saved artifact to: {path}")
print(f"File size: {os.path.getsize(path)} bytes")

# Reload
loaded = load_artifact(path)
print(f"\nReloaded: {loaded.meta['name']}")
print(f"Places: {loaded.nP}, Transitions: {loaded.nT}")
os.unlink(path)

Performance Benchmarks¶

Timing the key computations in this notebook:

  1. Petri net compilation (FusionCompiler.compile)
  2. Single inference step (matrix multiply + threshold)
  3. 30-step evolution loop (multi-step token propagation)
  4. Artifact export/import round-trip
In [ ]:
import timeit

# 1. Petri net compilation
def bench_compile():
    c = FusionCompiler(bitstream_length=1024, seed=42)
    c.compile(net)

t_compile = timeit.repeat(bench_compile, number=10, repeat=5)
print("FusionCompiler.compile (10 calls):")
print(f"  Mean: {np.mean(t_compile)*1000:.1f} ms +/- {np.std(t_compile)*1000:.1f} ms")
print(f"  Per call: {np.mean(t_compile)/10*1000:.2f} ms")

# 2. Single inference step (matrix multiply + threshold)
W_in_dense = compiled.W_in.toarray()
W_out_dense = compiled.W_out.toarray()
test_marking = np.zeros(compiled.n_places)
test_marking[0] = 0.8
test_marking[3] = 0.9

def bench_single_step():
    currents = W_in_dense @ test_marking
    fired = (currents >= compiled.thresholds).astype(float)
    consumed = W_in_dense.T @ fired
    produced = W_out_dense @ fired
    np.clip(test_marking - consumed + produced, 0.0, 1.0)  # result discarded so test_marking stays fixed across calls

t_step = timeit.repeat(bench_single_step, number=10000, repeat=5)
print("\nSingle inference step (10000 calls):")
print(f"  Mean: {np.mean(t_step)*1000:.1f} ms +/- {np.std(t_step)*1000:.1f} ms")
print(f"  Per call: {np.mean(t_step)/10000*1e6:.2f} us")

# 3. 30-step evolution loop
def bench_evolution():
    m = np.zeros(compiled.n_places)
    for k in range(30):
        t = k / 30
        m[0] = 0.6 * np.sin(2 * np.pi * t) ** 2
        m[3] = 0.5 * np.cos(2 * np.pi * t) ** 2
        currents = W_in_dense @ m
        fired = (currents >= compiled.thresholds).astype(float)
        consumed = W_in_dense.T @ fired
        produced = W_out_dense @ fired
        m = np.clip(m - consumed + produced, 0.0, 1.0)

t_evo = timeit.repeat(bench_evolution, number=100, repeat=5)
print("\n30-step evolution (100 runs):")
print(f"  Mean: {np.mean(t_evo)*1000:.1f} ms +/- {np.std(t_evo)*1000:.1f} ms")
print(f"  Per run: {np.mean(t_evo)/100*1000:.2f} ms")

# 4. Artifact export round-trip
import tempfile, os

def bench_export_import():
    art = compiled.export_artifact(
        name="bench_test", dt_control_s=0.001,
        readout_config={"action_specs": [], "gains": [], "abs_max": [], "slew_per_s": []},
        injection_config=[],
    )
    fd, p = tempfile.mkstemp(suffix=".scpnctl.json")
    os.close(fd)
    save_artifact(art, p)
    load_artifact(p)
    os.unlink(p)

t_io = timeit.repeat(bench_export_import, number=10, repeat=3)
print("\nArtifact export/import round-trip (10 calls):")
print(f"  Mean: {np.mean(t_io)*1000:.1f} ms +/- {np.std(t_io)*1000:.1f} ms")
print(f"  Per call: {np.mean(t_io)/10*1000:.2f} ms")

Summary¶

The neuro-symbolic compiler pipeline:

  1. Define plasma control logic as a Stochastic Petri Net
  2. Compile to stochastic LIF neurons (with optional SC-NeuroCore)
  3. Run inference: inject observations → fire transitions → read actuator commands
  4. Export as JSON artifact for deployment

This architecture enables sub-millisecond real-time plasma control with formally verifiable logic.
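The "formally verifiable" claim can be made concrete: since the compiled controller is just two small incidence matrices plus thresholds, its input-output logic can be checked exhaustively. A minimal sketch, rebuilding the four-transition controller by hand (the matrices below are assumptions mirroring the arcs defined above, not values read from `compiled`):

In [ ]:
```python
import numpy as np
from itertools import product

# Hand-built stand-ins for compiled.W_in (transitions x places) and
# compiled.W_out (places x transitions). Places 0-3 are the sensors
# R_high, R_low, Z_high, Z_low; places 4-7 are PF_up, PF_down, PF_in, PF_out.
pairs = [(0, 6), (1, 7), (2, 5), (3, 4)]  # (sensor place, actuator place) per transition
W_in = np.zeros((4, 8))
W_out = np.zeros((8, 4))
for t, (sensor, actuator) in enumerate(pairs):
    W_in[t, sensor] = 1.0
    W_out[actuator, t] = 1.0
thresholds = np.full(4, 0.5)

# Exhaustive check over all 2^4 binary sensor patterns: each active sensor
# must fire exactly its own transition and raise exactly its paired actuator.
for bits in product([0.0, 1.0], repeat=4):
    marking = np.concatenate([bits, np.zeros(4)])
    fired = (W_in @ marking >= thresholds).astype(float)
    new_marking = np.clip(marking - W_in.T @ fired + W_out @ fired, 0.0, 1.0)
    for t, (sensor, actuator) in enumerate(pairs):
        assert fired[t] == bits[t]
        assert new_marking[actuator] == bits[t]

print("All 16 sensor patterns check out.")
```

For four binary inputs this is a 16-case table; the same brute-force enumeration stays tractable for any controller with a modest number of input places, which is what makes the Petri net layer amenable to formal checking before compilation.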

Next: See 03_flight_simulator.ipynb for integration with the tokamak flight simulator.