
Hardware

Hardware abstraction layer for chip emulators and deployment targets.

Nine chip emulators are provided: Loihi CUBA, Loihi 2, TrueNorth, BrainScaleS AdEx, SpiNNaker, Akida, DPI, MemristorArray, and GenericASIC. Each emulates its target chip's neuron dynamics, precision constraints, and routing limitations.

Python
from sc_neurocore.hardware import LoihiCUBANeuron, TrueNorthNeuron

sc_neurocore.hardware

sc_neurocore.hardware — Neuromorphic Hardware Abstraction Layer.

Provides device specifications, resource estimation, constraint checking, neuron-to-core mapping, and deployment packaging for Loihi, SpiNNaker, BrainScaleS, FPGA, and Akida targets.

DeviceFamily

Bases: Enum

Supported neuromorphic hardware families.

Source code in src/sc_neurocore/hardware/device.py
Python
class DeviceFamily(Enum):
    """Supported neuromorphic hardware families."""

    LOIHI = auto()
    LOIHI2 = auto()
    SPINNAKER = auto()
    SPINNAKER2 = auto()
    BRAINSCALES = auto()
    BRAINSCALES2 = auto()
    FPGA_GENERIC = auto()
    AKIDA = auto()

DeviceSpec dataclass

Physical specification of a neuromorphic device.

Attributes:

Name Type Description
family DeviceFamily

Hardware family identifier.

cores int

Number of neuro-cores on the chip.

neurons_per_core int

Maximum neurons per core.

synapses_per_core int

Maximum synaptic connections per core.

axons_per_core int

Maximum input axons per core.

tick_ns float

Duration of one simulation tick in nanoseconds.

precision_bits int

Weight precision in bits.

supports_learning bool

Whether on-chip learning is supported.

power_per_core_mw float

Estimated power per active core (mW).

max_fan_in int

Maximum fan-in per neuron.

max_fan_out int

Maximum fan-out per neuron.

weight_bits int

Synaptic weight bit-width.

delay_bits int

Synaptic delay bit-width.

max_delay_ticks int

Maximum synaptic delay in ticks.

Source code in src/sc_neurocore/hardware/device.py
Python
@dataclass(frozen=True)
class DeviceSpec:
    """Physical specification of a neuromorphic device.

    Attributes:
        family: Hardware family identifier.
        cores: Number of neuro-cores on the chip.
        neurons_per_core: Maximum neurons per core.
        synapses_per_core: Maximum synaptic connections per core.
        axons_per_core: Maximum input axons per core.
        tick_ns: Duration of one simulation tick in nanoseconds.
        precision_bits: Weight precision in bits.
        supports_learning: Whether on-chip learning is supported.
        power_per_core_mw: Estimated power per active core (mW).
        max_fan_in: Maximum fan-in per neuron.
        max_fan_out: Maximum fan-out per neuron.
        weight_bits: Synaptic weight bit-width.
        delay_bits: Synaptic delay bit-width.
        max_delay_ticks: Maximum synaptic delay in ticks.
    """

    family: DeviceFamily
    cores: int
    neurons_per_core: int
    synapses_per_core: int
    axons_per_core: int
    tick_ns: float
    precision_bits: int
    supports_learning: bool
    power_per_core_mw: float
    max_fan_in: int = 256
    max_fan_out: int = 4096
    weight_bits: int = 8
    delay_bits: int = 6
    max_delay_ticks: int = 63
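Because the spec is a frozen dataclass, an instantiated device description is immutable. A minimal sketch below trims the class to three fields so it runs standalone (the full field set is in the source above); the core counts are illustrative placeholders, not official chip figures:

```python
from dataclasses import dataclass, FrozenInstanceError
from enum import Enum, auto

class DeviceFamily(Enum):  # trimmed to one member for the sketch
    LOIHI = auto()

@dataclass(frozen=True)
class DeviceSpec:  # trimmed to the fields used here; see full source above
    family: DeviceFamily
    cores: int
    neurons_per_core: int

loihi_like = DeviceSpec(DeviceFamily.LOIHI, cores=128, neurons_per_core=1024)
try:
    loihi_like.cores = 64  # frozen dataclass: assignment raises
except FrozenInstanceError:
    mutated = False
```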

ResourceEstimate dataclass

Hardware resource estimation result.

Attributes:

Name Type Description
cores_needed int

Minimum cores to host the network.

neurons_mapped int

Total neurons to place.

synapses_mapped int

Total synapses to route.

utilization_pct float

Average core utilization (%).

power_mw float

Estimated total power (mW).

latency_us float

Estimated single-tick latency (µs).

fits bool

Whether the network fits on the target device.

Source code in src/sc_neurocore/hardware/resource_estimator.py
Python
@dataclass
class ResourceEstimate:
    """Hardware resource estimation result.

    Attributes:
        cores_needed: Minimum cores to host the network.
        neurons_mapped: Total neurons to place.
        synapses_mapped: Total synapses to route.
        utilization_pct: Average core utilization (%).
        power_mw: Estimated total power (mW).
        latency_us: Estimated single-tick latency (µs).
        fits: Whether the network fits on the target device.
    """

    cores_needed: int
    neurons_mapped: int
    synapses_mapped: int
    utilization_pct: float
    power_mw: float
    latency_us: float
    fits: bool

ResourceEstimator

Estimate hardware cost for deploying an SC-NeuroCore network.

Source code in src/sc_neurocore/hardware/resource_estimator.py
Python
class ResourceEstimator:
    """Estimate hardware cost for deploying an SC-NeuroCore network."""

    def estimate(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
    ) -> ResourceEstimate:
        """Estimate resources from an adjacency matrix.

        Parameters:
            adjacency: (N, N) weighted adjacency matrix.
            device: Target device specification.

        Returns:
            ResourceEstimate with core counts, power, etc.
        """
        n_neurons = adjacency.shape[0]
        n_synapses = int(np.count_nonzero(adjacency))

        # Core count from neuron packing
        cores_from_neurons = math.ceil(n_neurons / device.neurons_per_core)

        # Core count from synapse packing
        cores_from_synapses = (
            math.ceil(n_synapses / device.synapses_per_core) if device.synapses_per_core > 0 else 1
        )

        cores_needed = max(cores_from_neurons, cores_from_synapses, 1)

        # Utilization
        total_neuron_slots = cores_needed * device.neurons_per_core
        utilization = (n_neurons / total_neuron_slots * 100) if total_neuron_slots > 0 else 0.0

        # Power
        power = cores_needed * device.power_per_core_mw

        # Latency: one tick
        latency_us = device.tick_ns / 1000.0

        fits = cores_needed <= device.cores

        return ResourceEstimate(
            cores_needed=cores_needed,
            neurons_mapped=n_neurons,
            synapses_mapped=n_synapses,
            utilization_pct=round(utilization, 2),
            power_mw=round(power, 3),
            latency_us=latency_us,
            fits=fits,
        )

    def fits(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
    ) -> bool:
        """Quick check: does the network fit on the device?"""
        return self.estimate(adjacency, device).fits

    def compare(
        self,
        adjacency: np.ndarray[Any, Any],
        devices: list[DeviceSpec],
    ) -> list[ResourceEstimate]:
        """Compare resource requirements across multiple devices."""
        return [self.estimate(adjacency, dev) for dev in devices]
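The estimator's arithmetic can be traced by hand. The sketch below replays the body of `estimate` on a 3-neuron ring network against hypothetical per-core limits (illustrative numbers, not tied to any real device): core count is the larger of the neuron-packing and synapse-packing requirements, and utilization is neurons placed over neuron slots allocated.

```python
import math
import numpy as np

# 3-neuron ring: each neuron drives exactly one successor.
adjacency = np.array([
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5],
    [0.5, 0.0, 0.0],
])
neurons_per_core, synapses_per_core = 2, 2  # hypothetical limits

n_neurons = adjacency.shape[0]                                   # 3
n_synapses = int(np.count_nonzero(adjacency))                    # 3
cores_from_neurons = math.ceil(n_neurons / neurons_per_core)     # 2
cores_from_synapses = math.ceil(n_synapses / synapses_per_core)  # 2
cores_needed = max(cores_from_neurons, cores_from_synapses, 1)   # 2
utilization = n_neurons / (cores_needed * neurons_per_core) * 100  # 75.0
```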

estimate(adjacency, device)

Estimate resources from an adjacency matrix.

Parameters:

Name Type Description Default
adjacency ndarray[Any, Any]

(N, N) weighted adjacency matrix.

required
device DeviceSpec

Target device specification.

required

Returns:

Type Description
ResourceEstimate

ResourceEstimate with core counts, power, etc.

Source code in src/sc_neurocore/hardware/resource_estimator.py
Python
def estimate(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
) -> ResourceEstimate:
    """Estimate resources from an adjacency matrix.

    Parameters:
        adjacency: (N, N) weighted adjacency matrix.
        device: Target device specification.

    Returns:
        ResourceEstimate with core counts, power, etc.
    """
    n_neurons = adjacency.shape[0]
    n_synapses = int(np.count_nonzero(adjacency))

    # Core count from neuron packing
    cores_from_neurons = math.ceil(n_neurons / device.neurons_per_core)

    # Core count from synapse packing
    cores_from_synapses = (
        math.ceil(n_synapses / device.synapses_per_core) if device.synapses_per_core > 0 else 1
    )

    cores_needed = max(cores_from_neurons, cores_from_synapses, 1)

    # Utilization
    total_neuron_slots = cores_needed * device.neurons_per_core
    utilization = (n_neurons / total_neuron_slots * 100) if total_neuron_slots > 0 else 0.0

    # Power
    power = cores_needed * device.power_per_core_mw

    # Latency: one tick
    latency_us = device.tick_ns / 1000.0

    fits = cores_needed <= device.cores

    return ResourceEstimate(
        cores_needed=cores_needed,
        neurons_mapped=n_neurons,
        synapses_mapped=n_synapses,
        utilization_pct=round(utilization, 2),
        power_mw=round(power, 3),
        latency_us=latency_us,
        fits=fits,
    )

fits(adjacency, device)

Quick check: does the network fit on the device?

Source code in src/sc_neurocore/hardware/resource_estimator.py
Python
def fits(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
) -> bool:
    """Quick check: does the network fit on the device?"""
    return self.estimate(adjacency, device).fits

compare(adjacency, devices)

Compare resource requirements across multiple devices.

Source code in src/sc_neurocore/hardware/resource_estimator.py
Python
def compare(
    self,
    adjacency: np.ndarray[Any, Any],
    devices: list[DeviceSpec],
) -> list[ResourceEstimate]:
    """Compare resource requirements across multiple devices."""
    return [self.estimate(adjacency, dev) for dev in devices]

Violation dataclass

A single hardware constraint violation.

Attributes:

Name Type Description
neuron_id int

Index of the offending neuron.

constraint str

Name of the violated constraint.

value float

Actual value that violates the constraint.

limit float

Maximum allowed value.

message str

Human-readable description.

Source code in src/sc_neurocore/hardware/constraints.py
Python
@dataclass
class Violation:
    """A single hardware constraint violation.

    Attributes:
        neuron_id: Index of the offending neuron.
        constraint: Name of the violated constraint.
        value: Actual value that violates the constraint.
        limit: Maximum allowed value.
        message: Human-readable description.
    """

    neuron_id: int
    constraint: str
    value: float
    limit: float
    message: str = ""

HardwareConstraints dataclass

Constraint set for a target device.

Derived from a DeviceSpec, or specified manually.

Source code in src/sc_neurocore/hardware/constraints.py
Python
@dataclass
class HardwareConstraints:
    """Constraint set for a target device.

    Derived from a ``DeviceSpec``, or specified manually.
    """

    max_fan_in: int = 256
    max_fan_out: int = 4096
    weight_bits: int = 8
    delay_bits: int = 6
    max_delay_ticks: int = 63

    @classmethod
    def from_device(cls, device: DeviceSpec) -> HardwareConstraints:
        """Derive constraints from a device specification."""
        return cls(
            max_fan_in=device.max_fan_in,
            max_fan_out=device.max_fan_out,
            weight_bits=device.weight_bits,
            delay_bits=device.delay_bits,
            max_delay_ticks=device.max_delay_ticks,
        )

from_device(device) classmethod

Derive constraints from a device specification.

Source code in src/sc_neurocore/hardware/constraints.py
Python
@classmethod
def from_device(cls, device: DeviceSpec) -> HardwareConstraints:
    """Derive constraints from a device specification."""
    return cls(
        max_fan_in=device.max_fan_in,
        max_fan_out=device.max_fan_out,
        weight_bits=device.weight_bits,
        delay_bits=device.delay_bits,
        max_delay_ticks=device.max_delay_ticks,
    )

ConstraintChecker

Check and optionally fix hardware constraint violations.

Source code in src/sc_neurocore/hardware/constraints.py
Python
class ConstraintChecker:
    """Check and optionally fix hardware constraint violations."""

    def check(
        self,
        adjacency: np.ndarray[Any, Any],
        constraints: HardwareConstraints,
        weights: np.ndarray[Any, Any] | None = None,
        delays: np.ndarray[Any, Any] | None = None,
    ) -> list[Violation]:
        """Check all constraints. Returns list of violations (empty if clean).

        Parameters:
            adjacency: (N, N) connectivity matrix (nonzero = connected).
            constraints: Hardware constraint set.
            weights: Optional (N, N) weight matrix to check precision.
            delays: Optional (N, N) delay matrix in ticks.
        """
        violations: list[Violation] = []
        n = adjacency.shape[0]

        # Fan-in: column sum of binary adjacency
        binary = (adjacency != 0).astype(int)
        fan_in = binary.sum(axis=0)
        for j in range(n):
            if fan_in[j] > constraints.max_fan_in:
                violations.append(
                    Violation(
                        neuron_id=j,
                        constraint="fan_in",
                        value=float(fan_in[j]),
                        limit=float(constraints.max_fan_in),
                        message=f"Neuron {j}: fan-in {fan_in[j]} > {constraints.max_fan_in}",
                    )
                )

        # Fan-out: row sum
        fan_out = binary.sum(axis=1)
        for i in range(n):
            if fan_out[i] > constraints.max_fan_out:
                violations.append(
                    Violation(
                        neuron_id=i,
                        constraint="fan_out",
                        value=float(fan_out[i]),
                        limit=float(constraints.max_fan_out),
                        message=f"Neuron {i}: fan-out {fan_out[i]} > {constraints.max_fan_out}",
                    )
                )

        # Weight precision
        if weights is not None:
            max_abs = np.max(np.abs(weights))
            if max_abs > 0:
                w_max = 2 ** (constraints.weight_bits - 1) - 1
                scale = w_max / max_abs
                quantized = np.round(weights * scale) / scale
                rel_error = np.max(np.abs(weights - quantized)) / max_abs
                if rel_error > 0.1:
                    violations.append(
                        Violation(
                            neuron_id=-1,
                            constraint="weight_precision",
                            value=float(rel_error),
                            limit=0.1,
                            message=f"Weight quantization error {rel_error:.3f} > 10% with {constraints.weight_bits}-bit precision",
                        )
                    )

        # Delay bounds
        if delays is not None:
            max_delay = np.max(delays)
            if max_delay > constraints.max_delay_ticks:
                offenders = np.argwhere(delays > constraints.max_delay_ticks)
                for idx in offenders[:10]:  # report first 10
                    violations.append(
                        Violation(
                            neuron_id=int(idx[0]),
                            constraint="delay",
                            value=float(delays[idx[0], idx[1]]),
                            limit=float(constraints.max_delay_ticks),
                            message=f"Synapse ({idx[0]},{idx[1]}): delay {delays[idx[0], idx[1]]} > {constraints.max_delay_ticks}",
                        )
                    )

        return violations

    def auto_fix(
        self,
        adjacency: np.ndarray[Any, Any],
        constraints: HardwareConstraints,
    ) -> np.ndarray[Any, Any]:
        """Attempt automatic fixes: prune weakest connections to satisfy fan-in/out.

        Returns a modified adjacency matrix.
        """
        adj = adjacency.copy()
        n = adj.shape[0]

        # Fix fan-in violations by pruning weakest incoming connections
        for j in range(n):
            incoming = np.nonzero(adj[:, j])[0]
            if len(incoming) > constraints.max_fan_in:
                strengths = np.abs(adj[incoming, j])
                keep_idx = np.argsort(strengths)[-constraints.max_fan_in :]
                prune_idx = np.setdiff1d(np.arange(len(incoming)), keep_idx)
                for pi in prune_idx:
                    adj[incoming[pi], j] = 0.0

        # Fix fan-out violations by pruning weakest outgoing connections
        for i in range(n):
            outgoing = np.nonzero(adj[i, :])[0]
            if len(outgoing) > constraints.max_fan_out:
                strengths = np.abs(adj[i, outgoing])
                keep_idx = np.argsort(strengths)[-constraints.max_fan_out :]
                prune_idx = np.setdiff1d(np.arange(len(outgoing)), keep_idx)
                for pi in prune_idx:
                    adj[i, outgoing[pi]] = 0.0

        return adj
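The weight-precision test in `check` can be replayed in isolation: weights are scaled into the signed integer range of the target bit-width, rounded, scaled back, and the worst-case deviation is compared against the 10% threshold. A minimal sketch with a hypothetical 4-bit target:

```python
import numpy as np

weights = np.array([[0.0, 1.0],
                    [0.003, 0.0]])
weight_bits = 4                            # hypothetical target precision

max_abs = np.max(np.abs(weights))          # 1.0
w_max = 2 ** (weight_bits - 1) - 1         # 7 (largest signed 4-bit magnitude)
scale = w_max / max_abs
quantized = np.round(weights * scale) / scale
rel_error = np.max(np.abs(weights - quantized)) / max_abs
# 0.003 * 7 = 0.021 rounds to 0, so the worst-case error is 0.003 — well
# under the 10% threshold, hence no weight_precision violation is raised.
```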

check(adjacency, constraints, weights=None, delays=None)

Check all constraints. Returns list of violations (empty if clean).

Parameters:

Name Type Description Default
adjacency ndarray[Any, Any]

(N, N) connectivity matrix (nonzero = connected).

required
constraints HardwareConstraints

Hardware constraint set.

required
weights ndarray[Any, Any] | None

Optional (N, N) weight matrix to check precision.

None
delays ndarray[Any, Any] | None

Optional (N, N) delay matrix in ticks.

None
Source code in src/sc_neurocore/hardware/constraints.py
Python
def check(
    self,
    adjacency: np.ndarray[Any, Any],
    constraints: HardwareConstraints,
    weights: np.ndarray[Any, Any] | None = None,
    delays: np.ndarray[Any, Any] | None = None,
) -> list[Violation]:
    """Check all constraints. Returns list of violations (empty if clean).

    Parameters:
        adjacency: (N, N) connectivity matrix (nonzero = connected).
        constraints: Hardware constraint set.
        weights: Optional (N, N) weight matrix to check precision.
        delays: Optional (N, N) delay matrix in ticks.
    """
    violations: list[Violation] = []
    n = adjacency.shape[0]

    # Fan-in: column sum of binary adjacency
    binary = (adjacency != 0).astype(int)
    fan_in = binary.sum(axis=0)
    for j in range(n):
        if fan_in[j] > constraints.max_fan_in:
            violations.append(
                Violation(
                    neuron_id=j,
                    constraint="fan_in",
                    value=float(fan_in[j]),
                    limit=float(constraints.max_fan_in),
                    message=f"Neuron {j}: fan-in {fan_in[j]} > {constraints.max_fan_in}",
                )
            )

    # Fan-out: row sum
    fan_out = binary.sum(axis=1)
    for i in range(n):
        if fan_out[i] > constraints.max_fan_out:
            violations.append(
                Violation(
                    neuron_id=i,
                    constraint="fan_out",
                    value=float(fan_out[i]),
                    limit=float(constraints.max_fan_out),
                    message=f"Neuron {i}: fan-out {fan_out[i]} > {constraints.max_fan_out}",
                )
            )

    # Weight precision
    if weights is not None:
        max_abs = np.max(np.abs(weights))
        if max_abs > 0:
            w_max = 2 ** (constraints.weight_bits - 1) - 1
            scale = w_max / max_abs
            quantized = np.round(weights * scale) / scale
            rel_error = np.max(np.abs(weights - quantized)) / max_abs
            if rel_error > 0.1:
                violations.append(
                    Violation(
                        neuron_id=-1,
                        constraint="weight_precision",
                        value=float(rel_error),
                        limit=0.1,
                        message=f"Weight quantization error {rel_error:.3f} > 10% with {constraints.weight_bits}-bit precision",
                    )
                )

    # Delay bounds
    if delays is not None:
        max_delay = np.max(delays)
        if max_delay > constraints.max_delay_ticks:
            offenders = np.argwhere(delays > constraints.max_delay_ticks)
            for idx in offenders[:10]:  # report first 10
                violations.append(
                    Violation(
                        neuron_id=int(idx[0]),
                        constraint="delay",
                        value=float(delays[idx[0], idx[1]]),
                        limit=float(constraints.max_delay_ticks),
                        message=f"Synapse ({idx[0]},{idx[1]}): delay {delays[idx[0], idx[1]]} > {constraints.max_delay_ticks}",
                    )
                )

    return violations

auto_fix(adjacency, constraints)

Attempt automatic fixes: prune weakest connections to satisfy fan-in/out.

Returns a modified adjacency matrix.

Source code in src/sc_neurocore/hardware/constraints.py
Python
def auto_fix(
    self,
    adjacency: np.ndarray[Any, Any],
    constraints: HardwareConstraints,
) -> np.ndarray[Any, Any]:
    """Attempt automatic fixes: prune weakest connections to satisfy fan-in/out.

    Returns a modified adjacency matrix.
    """
    adj = adjacency.copy()
    n = adj.shape[0]

    # Fix fan-in violations by pruning weakest incoming connections
    for j in range(n):
        incoming = np.nonzero(adj[:, j])[0]
        if len(incoming) > constraints.max_fan_in:
            strengths = np.abs(adj[incoming, j])
            keep_idx = np.argsort(strengths)[-constraints.max_fan_in :]
            prune_idx = np.setdiff1d(np.arange(len(incoming)), keep_idx)
            for pi in prune_idx:
                adj[incoming[pi], j] = 0.0

    # Fix fan-out violations by pruning weakest outgoing connections
    for i in range(n):
        outgoing = np.nonzero(adj[i, :])[0]
        if len(outgoing) > constraints.max_fan_out:
            strengths = np.abs(adj[i, outgoing])
            keep_idx = np.argsort(strengths)[-constraints.max_fan_out :]
            prune_idx = np.setdiff1d(np.arange(len(outgoing)), keep_idx)
            for pi in prune_idx:
                adj[i, outgoing[pi]] = 0.0

    return adj
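The fan-in half of `auto_fix` can be exercised on a toy matrix: with a hypothetical limit of 2, the column below has fan-in 3, so the weakest incoming connection (|0.1|) is pruned and the two strongest survive.

```python
import numpy as np

max_fan_in = 2                      # hypothetical constraint
adj = np.array([
    [0.0, 0.0, 0.9],
    [0.0, 0.0, 0.1],                # weakest input to neuron 2 — pruned
    [0.0, 0.0, 0.5],
])

# Same pruning loop as auto_fix: keep the strongest max_fan_in inputs.
for j in range(adj.shape[1]):
    incoming = np.nonzero(adj[:, j])[0]
    if len(incoming) > max_fan_in:
        strengths = np.abs(adj[incoming, j])
        keep_idx = np.argsort(strengths)[-max_fan_in:]
        for pi in np.setdiff1d(np.arange(len(incoming)), keep_idx):
            adj[incoming[pi], j] = 0.0
```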

NeuronPlacement dataclass

Placement of a single neuron on hardware.

Source code in src/sc_neurocore/hardware/mapping.py
Python
@dataclass
class NeuronPlacement:
    """Placement of a single neuron on hardware."""

    neuron_id: int
    core_id: int
    local_id: int  # position within the core

Mapper

Map neurons to cores using different strategies.

Source code in src/sc_neurocore/hardware/mapping.py
Python
class Mapper:
    """Map neurons to cores using different strategies."""

    def map_greedy(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
    ) -> list[NeuronPlacement]:
        """Greedy sequential mapping: fill cores one by one.

        Simple but fast. Good baseline.
        """
        n = adjacency.shape[0]
        npc = device.neurons_per_core
        placements = []

        for i in range(n):
            core = i // npc
            local = i % npc
            placements.append(NeuronPlacement(neuron_id=i, core_id=core, local_id=local))

        return placements

    def map_balanced(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
    ) -> list[NeuronPlacement]:
        """Balanced mapping: distribute neurons evenly across cores.

        Neurons are assigned round-robin to minimize load imbalance.
        """
        import math

        n = adjacency.shape[0]
        npc = device.neurons_per_core
        n_cores = math.ceil(n / npc)
        n_cores = min(n_cores, device.cores)

        placements = []
        core_counts = [0] * n_cores

        for i in range(n):
            core = i % n_cores
            placements.append(
                NeuronPlacement(
                    neuron_id=i,
                    core_id=core,
                    local_id=core_counts[core],
                )
            )
            core_counts[core] += 1

        return placements

    def map_locality(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
    ) -> list[NeuronPlacement]:
        """Locality-aware mapping: cluster connected neurons on same core.

        Uses a simple greedy clustering: start from the most connected
        neuron, pack its neighbors into the same core until full.
        """
        import math

        n = adjacency.shape[0]
        npc = device.neurons_per_core
        n_cores = math.ceil(n / npc)
        n_cores = min(n_cores, device.cores)

        placed = set()
        placements_dict: dict[int, NeuronPlacement] = {}

        # Degree-ordered seed selection
        degree = np.abs(adjacency).sum(axis=1) + np.abs(adjacency).sum(axis=0)
        order = np.argsort(-degree)  # highest degree first

        current_core = 0
        current_local = 0

        for seed in order:
            if seed in placed:
                continue

            # Start new core with seed
            if current_local >= npc:
                current_core += 1
                current_local = 0
                if current_core >= n_cores:
                    break

            placements_dict[seed] = NeuronPlacement(
                neuron_id=int(seed), core_id=current_core, local_id=current_local
            )
            placed.add(int(seed))
            current_local += 1

            # Pack neighbors of seed into same core
            neighbors = np.nonzero(adjacency[seed])[0]
            neighbor_strength = np.abs(adjacency[seed, neighbors])
            sorted_neighbors = neighbors[np.argsort(-neighbor_strength)]

            for nb in sorted_neighbors:
                nb = int(nb)
                if nb in placed or current_local >= npc:
                    continue
                placements_dict[nb] = NeuronPlacement(
                    neuron_id=nb, core_id=current_core, local_id=current_local
                )
                placed.add(nb)
                current_local += 1

        # Handle any remaining unplaced neurons
        for i in range(n):
            if i not in placed:
                if current_local >= npc:
                    current_core += 1
                    current_local = 0
                placements_dict[i] = NeuronPlacement(
                    neuron_id=i, core_id=current_core, local_id=current_local
                )
                current_local += 1

        return [placements_dict[i] for i in range(n)]
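The greedy strategy reduces to integer division: neuron `i` lands on core `i // npc` at local slot `i % npc`. A one-line sketch with a hypothetical 4-neurons-per-core device:

```python
npc = 4  # hypothetical neurons_per_core
# (neuron_id, core_id, local_id) triples, as map_greedy would place them
placements = [(i, i // npc, i % npc) for i in range(6)]
# core 0 fills with neurons 0-3; neurons 4 and 5 spill onto core 1
```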

map_greedy(adjacency, device)

Greedy sequential mapping: fill cores one by one.

Simple but fast. Good baseline.

Source code in src/sc_neurocore/hardware/mapping.py
Python
def map_greedy(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
) -> list[NeuronPlacement]:
    """Greedy sequential mapping: fill cores one by one.

    Simple but fast. Good baseline.
    """
    n = adjacency.shape[0]
    npc = device.neurons_per_core
    placements = []

    for i in range(n):
        core = i // npc
        local = i % npc
        placements.append(NeuronPlacement(neuron_id=i, core_id=core, local_id=local))

    return placements

map_balanced(adjacency, device)

Balanced mapping: distribute neurons evenly across cores.

Neurons are assigned round-robin to minimize load imbalance.

Source code in src/sc_neurocore/hardware/mapping.py
Python
def map_balanced(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
) -> list[NeuronPlacement]:
    """Balanced mapping: distribute neurons evenly across cores.

    Neurons are assigned round-robin to minimize load imbalance.
    """
    import math

    n = adjacency.shape[0]
    npc = device.neurons_per_core
    n_cores = math.ceil(n / npc)
    n_cores = min(n_cores, device.cores)

    placements = []
    core_counts = [0] * n_cores

    for i in range(n):
        core = i % n_cores
        placements.append(
            NeuronPlacement(
                neuron_id=i,
                core_id=core,
                local_id=core_counts[core],
            )
        )
        core_counts[core] += 1

    return placements

map_locality(adjacency, device)

Locality-aware mapping: cluster connected neurons on same core.

Uses a simple greedy clustering: start from the most connected neuron, pack its neighbors into the same core until full.

Source code in src/sc_neurocore/hardware/mapping.py
Python
def map_locality(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
) -> list[NeuronPlacement]:
    """Locality-aware mapping: cluster connected neurons on same core.

    Uses a simple greedy clustering: start from the most connected
    neuron, pack its neighbors into the same core until full.
    """
    import math

    n = adjacency.shape[0]
    npc = device.neurons_per_core
    n_cores = math.ceil(n / npc)
    n_cores = min(n_cores, device.cores)

    placed = set()
    placements_dict: dict[int, NeuronPlacement] = {}

    # Degree-ordered seed selection
    degree = np.abs(adjacency).sum(axis=1) + np.abs(adjacency).sum(axis=0)
    order = np.argsort(-degree)  # highest degree first

    current_core = 0
    current_local = 0

    for seed in order:
        if seed in placed:
            continue

        # Start new core with seed
        if current_local >= npc:
            current_core += 1
            current_local = 0
            if current_core >= n_cores:
                break

        placements_dict[seed] = NeuronPlacement(
            neuron_id=int(seed), core_id=current_core, local_id=current_local
        )
        placed.add(int(seed))
        current_local += 1

        # Pack neighbors of seed into same core
        neighbors = np.nonzero(adjacency[seed])[0]
        neighbor_strength = np.abs(adjacency[seed, neighbors])
        sorted_neighbors = neighbors[np.argsort(-neighbor_strength)]

        for nb in sorted_neighbors:
            nb = int(nb)
            if nb in placed or current_local >= npc:
                continue
            placements_dict[nb] = NeuronPlacement(
                neuron_id=nb, core_id=current_core, local_id=current_local
            )
            placed.add(nb)
            current_local += 1

    # Handle any remaining unplaced neurons
    for i in range(n):
        if i not in placed:
            if current_local >= npc:
                current_core += 1
                current_local = 0
            placements_dict[i] = NeuronPlacement(
                neuron_id=i, core_id=current_core, local_id=current_local
            )
            current_local += 1

    return [placements_dict[i] for i in range(n)]
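The greedy clustering can be exercised standalone. The sketch below mirrors the algorithm in pure Python without importing sc_neurocore; the 6-neuron adjacency matrix and the core size of 3 are made-up illustration values, not real device limits.

```python
# Standalone sketch of the greedy locality clustering above:
# seed a core with the highest-degree neuron, then pack its
# neighbors into the same core while room remains.
adjacency = [
    [0, 1, 1, 0, 0, 0],  # neuron 0 connects to 1 and 2
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0],  # neurons 3-5 form a second cluster
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
]
neurons_per_core = 3  # illustration value

n = len(adjacency)
# Degree = out-degree + in-degree, as in the source above.
degree = [sum(adjacency[i]) + sum(row[i] for row in adjacency) for i in range(n)]
order = sorted(range(n), key=lambda i: -degree[i])  # highest degree first

placed: set[int] = set()
core_of: dict[int, int] = {}
core, local = 0, 0

for seed in order:
    if seed in placed:
        continue
    if local >= neurons_per_core:
        core, local = core + 1, 0
    core_of[seed] = core
    placed.add(seed)
    local += 1
    # Pack the seed's neighbors into the same core until it is full.
    for nb in range(n):
        if adjacency[seed][nb] and nb not in placed and local < neurons_per_core:
            core_of[nb] = core
            placed.add(nb)
            local += 1

print(core_of)  # neurons 0-2 land on core 0, neurons 3-5 on core 1
```

Both three-neuron clusters end up co-located, so their synapses stay on-core and off-chip routing traffic is avoided.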

DeploymentPackage dataclass

Self-contained deployment artifact for neuromorphic hardware.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| device | DeviceSpec | Target device specification. |
| placements | list[NeuronPlacement] | Neuron-to-core mapping. |
| config_blob | bytes | Binary configuration data for the target. |
| metadata | dict[str, Any] | Additional deployment metadata. |

Source code in src/sc_neurocore/hardware/deployment.py
Python
@dataclass
class DeploymentPackage:
    """Self-contained deployment artifact for neuromorphic hardware.

    Attributes:
        device: Target device specification.
        placements: Neuron-to-core mapping.
        config_blob: Binary configuration data for the target.
        metadata: Additional deployment metadata.
    """

    device: DeviceSpec
    placements: list[NeuronPlacement]
    config_blob: bytes
    metadata: dict[str, Any] = field(default_factory=dict)

Deployer

Create and validate deployment packages.

Source code in src/sc_neurocore/hardware/deployment.py
Python
class Deployer:
    """Create and validate deployment packages."""

    def package(
        self,
        adjacency: np.ndarray[Any, Any],
        device: DeviceSpec,
        placements: list[NeuronPlacement],
        weights: np.ndarray[Any, Any] | None = None,
    ) -> DeploymentPackage:
        """Create a deployment package.

        Parameters:
            adjacency: (N, N) network connectivity matrix.
            device: Target device.
            placements: Neuron-to-core mapping.
            weights: Optional weight matrix (defaults to adjacency values).

        Returns:
            DeploymentPackage ready for deployment.
        """
        w = weights if weights is not None else adjacency
        config = self._build_config(w, placements, device)
        n_neurons = adjacency.shape[0]
        n_synapses = int(np.count_nonzero(adjacency))
        n_cores = max(p.core_id for p in placements) + 1

        metadata = {
            "n_neurons": n_neurons,
            "n_synapses": n_synapses,
            "n_cores_used": n_cores,
            "device_family": device.family.name,
            "weight_bits": device.weight_bits,
            "fits": n_cores <= device.cores,
        }

        return DeploymentPackage(
            device=device,
            placements=placements,
            config_blob=config,
            metadata=metadata,
        )

    def validate(self, package: DeploymentPackage) -> bool:
        """Validate a deployment package for consistency.

        Checks:
        - All neuron IDs are unique
        - No core_id exceeds device capacity
        - Config blob is non-empty
        - All local IDs are within neurons_per_core
        """
        if not package.config_blob:
            return False

        neuron_ids = [p.neuron_id for p in package.placements]
        if len(set(neuron_ids)) != len(neuron_ids):
            return False  # duplicate neuron placement

        for p in package.placements:
            if p.core_id >= package.device.cores:
                return False
            if p.local_id >= package.device.neurons_per_core:
                return False

        return True

    def summary(self, package: DeploymentPackage) -> str:
        """Human-readable deployment summary."""
        m = package.metadata
        lines = [
            "=== Deployment Summary ===",
            f"Device:      {m.get('device_family', 'unknown')}",
            f"Neurons:     {m.get('n_neurons', 0)}",
            f"Synapses:    {m.get('n_synapses', 0)}",
            f"Cores used:  {m.get('n_cores_used', 0)} / {package.device.cores}",
            f"Fits:        {'Yes' if m.get('fits') else 'No'}",
            f"Config size: {len(package.config_blob)} bytes",
            f"Weight bits: {m.get('weight_bits', 0)}",
        ]
        return "\n".join(lines)

    def _build_config(
        self,
        weights: np.ndarray[Any, Any],
        placements: list[NeuronPlacement],
        device: DeviceSpec,
    ) -> bytes:
        """Build binary configuration blob.

        Format: header + per-core neuron count + per-synapse (src, tgt, weight).
        """
        n = weights.shape[0]
        # Quantize weights to target precision
        w_max = 2 ** (device.weight_bits - 1) - 1
        abs_max = np.max(np.abs(weights))
        if abs_max > 0:
            scale = w_max / abs_max
        else:
            scale = 1.0

        # Header: magic + n_neurons + n_synapses
        buf = bytearray()
        buf.extend(struct.pack(">4sII", b"SCNC", n, int(np.count_nonzero(weights))))

        # Placement table
        for p in placements:
            buf.extend(struct.pack(">IHH", p.neuron_id, p.core_id, p.local_id))

        # Synapse table
        rows, cols = np.nonzero(weights)
        for r, c in zip(rows, cols):
            qw = int(np.round(weights[r, c] * scale))
            qw = max(-w_max, min(w_max, qw))
            buf.extend(struct.pack(">IIh", int(r), int(c), qw))

        return bytes(buf)
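The blob layout built by _build_config can be exercised standalone with the stdlib struct module. The sketch below packs and re-reads one header, one placement record, and one synapse record in the same big-endian formats; the 8-bit precision, the single 0.5 weight, and the assumed abs-max of 1.0 are illustration values.

```python
import struct

# Record formats mirroring _build_config:
#   header    ">4sII" -> magic, n_neurons, n_synapses   (12 bytes)
#   placement ">IHH"  -> neuron_id, core_id, local_id    (8 bytes)
#   synapse   ">IIh"  -> src, tgt, quantized weight     (10 bytes)
weight_bits = 8
w_max = 2 ** (weight_bits - 1) - 1  # 127 for signed 8-bit
scale = w_max / 1.0                 # assuming the abs-max weight is 1.0

buf = bytearray()
buf.extend(struct.pack(">4sII", b"SCNC", 2, 1))            # header
buf.extend(struct.pack(">IHH", 0, 0, 0))                   # neuron 0 on core 0
buf.extend(struct.pack(">IIh", 0, 1, round(0.5 * scale)))  # synapse 0 -> 1

# Re-read the records at their fixed offsets.
magic, n_neurons, n_synapses = struct.unpack_from(">4sII", buf, 0)
src, tgt, qw = struct.unpack_from(">IIh", buf, 12 + 8)  # skip header + placement
print(magic, n_neurons, qw)  # b'SCNC' 2 64
```

Because the formats are fixed-width, any record can be located by offset arithmetic alone, which keeps on-target parsing trivial.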

package(adjacency, device, placements, weights=None)

Create a deployment package.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| adjacency | ndarray[Any, Any] | (N, N) network connectivity matrix. | required |
| device | DeviceSpec | Target device. | required |
| placements | list[NeuronPlacement] | Neuron-to-core mapping. | required |
| weights | ndarray[Any, Any] \| None | Optional weight matrix (defaults to adjacency values). | None |

Returns:

| Type | Description |
| --- | --- |
| DeploymentPackage | DeploymentPackage ready for deployment. |

Source code in src/sc_neurocore/hardware/deployment.py
Python
def package(
    self,
    adjacency: np.ndarray[Any, Any],
    device: DeviceSpec,
    placements: list[NeuronPlacement],
    weights: np.ndarray[Any, Any] | None = None,
) -> DeploymentPackage:
    """Create a deployment package.

    Parameters:
        adjacency: (N, N) network connectivity matrix.
        device: Target device.
        placements: Neuron-to-core mapping.
        weights: Optional weight matrix (defaults to adjacency values).

    Returns:
        DeploymentPackage ready for deployment.
    """
    w = weights if weights is not None else adjacency
    config = self._build_config(w, placements, device)
    n_neurons = adjacency.shape[0]
    n_synapses = int(np.count_nonzero(adjacency))
    n_cores = max(p.core_id for p in placements) + 1

    metadata = {
        "n_neurons": n_neurons,
        "n_synapses": n_synapses,
        "n_cores_used": n_cores,
        "device_family": device.family.name,
        "weight_bits": device.weight_bits,
        "fits": n_cores <= device.cores,
    }

    return DeploymentPackage(
        device=device,
        placements=placements,
        config_blob=config,
        metadata=metadata,
    )

validate(package)

Validate a deployment package for consistency.

Checks:

- All neuron IDs are unique
- No core_id exceeds device capacity
- Config blob is non-empty
- All local IDs are within neurons_per_core

Source code in src/sc_neurocore/hardware/deployment.py
Python
def validate(self, package: DeploymentPackage) -> bool:
    """Validate a deployment package for consistency.

    Checks:
    - All neuron IDs are unique
    - No core_id exceeds device capacity
    - Config blob is non-empty
    - All local IDs are within neurons_per_core
    """
    if not package.config_blob:
        return False

    neuron_ids = [p.neuron_id for p in package.placements]
    if len(set(neuron_ids)) != len(neuron_ids):
        return False  # duplicate neuron placement

    for p in package.placements:
        if p.core_id >= package.device.cores:
            return False
        if p.local_id >= package.device.neurons_per_core:
            return False

    return True
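The duplicate-placement check is the one most often tripped by hand-built mappings. A standalone sketch, using plain (neuron_id, core_id, local_id) tuples as stand-ins for NeuronPlacement:

```python
# Sketch of validate()'s uniqueness check: a neuron mapped to two
# cores makes the package inconsistent and must be rejected.
placements = [(0, 0, 0), (1, 0, 1), (1, 1, 0)]  # neuron 1 placed twice
neuron_ids = [p[0] for p in placements]
is_valid = len(set(neuron_ids)) == len(neuron_ids)
print(is_valid)  # False: duplicate neuron_id fails validation
```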

summary(package)

Human-readable deployment summary.

Source code in src/sc_neurocore/hardware/deployment.py
Python
def summary(self, package: DeploymentPackage) -> str:
    """Human-readable deployment summary."""
    m = package.metadata
    lines = [
        "=== Deployment Summary ===",
        f"Device:      {m.get('device_family', 'unknown')}",
        f"Neurons:     {m.get('n_neurons', 0)}",
        f"Synapses:    {m.get('n_synapses', 0)}",
        f"Cores used:  {m.get('n_cores_used', 0)} / {package.device.cores}",
        f"Fits:        {'Yes' if m.get('fits') else 'No'}",
        f"Config size: {len(package.config_blob)} bytes",
        f"Weight bits: {m.get('weight_bits', 0)}",
    ]
    return "\n".join(lines)

get_device(family)

Look up a device specification by family name or enum.

Source code in src/sc_neurocore/hardware/device.py
Python
def get_device(family: DeviceFamily | str) -> DeviceSpec:
    """Look up a device specification by family name or enum."""
    if isinstance(family, str):
        family = DeviceFamily[family.upper()]
    spec = DEVICE_CATALOG.get(family)
    if spec is None:
        raise ValueError(f"Unknown device family: {family}")
    return spec
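The same lookup pattern, standalone: a string is normalized to the enum before the catalog lookup, so `get_device("loihi")` and `get_device(DeviceFamily.LOIHI)` resolve identically. The two-member enum and the one-entry catalog below are made-up stand-ins for DeviceFamily and DEVICE_CATALOG.

```python
from enum import Enum, auto

class DeviceFamily(Enum):
    LOIHI = auto()
    AKIDA = auto()

# Stand-in catalog; the real DEVICE_CATALOG maps families to DeviceSpec.
CATALOG = {DeviceFamily.LOIHI: "loihi-spec"}

def get_device(family):
    if isinstance(family, str):
        family = DeviceFamily[family.upper()]  # "loihi" -> DeviceFamily.LOIHI
    spec = CATALOG.get(family)
    if spec is None:
        raise ValueError(f"Unknown device family: {family}")
    return spec

print(get_device("loihi"))  # loihi-spec
```

A family that exists in the enum but has no catalog entry (AKIDA here) raises ValueError rather than returning None, so callers never have to None-check the spec.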