
Quantum Annealing Bridge

Module: sc_neurocore.bridges.quantum_annealing Source: src/sc_neurocore/bridges/quantum_annealing.py — 1883 LOC Status (v3.14.0): 24 public exports across 18 classes + 4 exporters + 1 enum + 1 helper; 198-test bridges suite passes; Rust accelerator path provided by sc_neurocore_engine (speedup measured in §6.1; the engine wheel must be installed in the active venv for the Rust path to activate). __tier__ = "research". The dimod and dwave-ocean-sdk deps are soft-imported (graceful fallback).

This page covers the third of three speculative hardware bridges. Sister pages:

  • DNA strand displacement: api/bridges/dna_mapper.md
  • Photonic NoC: api/bridges/photonic_noc.md


1. What this bridge does

Compiles an SC neural network's adjacency matrix into Ising or QUBO form for D-Wave annealers and classical simulated-annealing solvers:

Text Only
SC Network adjacency  →  SCToIsing / SCToQUBO  →  IsingModel / QUBOModel
       (NxN)                   ↓                          ↓
                          Compiler              SimulatedAnnealer  ── Rust path ──
                                                          ↓                  ↓
                                                  best_spins +         Python fallback
                                                  best_energy
                                                          ↓
                                                  DWaveInterface (optional QPU)
                                                          ↓
                                                  Sample distribution

Six analysis / utility classes wrap the core path: EmbeddingAnalyzer (D-Wave Pegasus topology fit), ChainBreakResolver (post-processing), AnnealingSchedule (custom annealing curves), GaugeTransform (gauge averaging for ICE mitigation), ProblemDecomposer (large-problem partitioning), TTSAnalyzer (time-to-solution scaling).


2. Public surface

24 symbols re-exported from sc_neurocore.bridges.__init__:

Group Symbols
Enums + dataclasses ProblemType, QubitSpec, CouplerSpec, IsingModel, QUBOModel
Compilers SCToIsing, SCToQUBO, SCBitstreamQUBO, SCPrecisionEncoder
Solvers / interfaces SimulatedAnnealer, DWaveInterface
Analysis EnergyLandscape, EmbeddingAnalyzer, TTSAnalyzer, SampleAggregator
Hardware-graph utilities HardwareGraph, ChainBreakResolver, AnnealingSchedule, GaugeTransform, ProblemDecomposer
Exporters export_ising_json, export_qubo_json, export_bqm, visualize_ising

Module-level constants:

Constant Value Note
_DEFAULT_CHAIN_STRENGTH 2.0 for D-Wave embedding
_DEFAULT_NUM_READS 1000 per QPU call
_DEFAULT_ANNEALING_TIME_US 20.0 μs
_BOLTZMANN_K 1.380649e-23 J/K, physical

3. Compilers: SCToIsing / SCToQUBO

Both compilers accept an N×N adjacency matrix and produce a model with N qubits + couplings derived from non-zero off-diagonal weights.

Python
ising_model: IsingModel = SCToIsing().compile(adjacency)
qubo_model:  QUBOModel  = SCToQUBO().compile(adjacency)
  • IsingModel.h: dict[int, float] — bias per qubit
  • IsingModel.J: dict[(int, int), float] — coupling per edge
  • QUBOModel.Q: dict[(int, int), float] — full upper-triangular matrix (diagonal = bias, off-diagonal = coupling)

The two are mathematically equivalent (s_i = 2*x_i - 1, x_i ∈ {0,1}, s_i ∈ {-1,+1}); the representation choice depends on the downstream solver.
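The equivalence is easy to check numerically. A minimal sketch using plain dicts (not the bridge's dataclasses), deriving the Ising form from x_i = (s_i + 1)/2 and verifying the energies agree on every configuration:

```python
from itertools import product

def qubo_energy(Q, offset, x):
    # E(x) = x^T Q x + offset, Q as a sparse {(i, j): value} dict
    return offset + sum(q * x[i] * x[j] for (i, j), q in Q.items())

def ising_energy(h, J, offset, s):
    # H(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j + offset
    return (offset
            + sum(hi * s[i] for i, hi in h.items())
            + sum(jij * s[i] * s[j] for (i, j), jij in J.items()))

def qubo_to_ising(Q, offset=0.0):
    # Substitute x_i = (s_i + 1)/2 and collect linear, quadratic, constant terms
    h, J = {}, {}
    for (i, j), q in Q.items():
        if i == j:                       # q*x_i = (q/2)*s_i + q/2
            h[i] = h.get(i, 0.0) + q / 2.0
            offset += q / 2.0
        else:                            # q*x_i*x_j = (q/4)*(s_i*s_j + s_i + s_j + 1)
            a, b = min(i, j), max(i, j)
            J[(a, b)] = J.get((a, b), 0.0) + q / 4.0
            h[i] = h.get(i, 0.0) + q / 4.0
            h[j] = h.get(j, 0.0) + q / 4.0
            offset += q / 4.0
    return h, J, offset

Q = {(0, 0): -1.0, (1, 1): 0.5, (0, 1): 2.0, (1, 2): -1.5}
h, J, off = qubo_to_ising(Q)
for x in product((0, 1), repeat=3):
    s = [2 * xi - 1 for xi in x]         # the s_i = 2*x_i - 1 map from the text
    assert abs(qubo_energy(Q, 0.0, x) - ising_energy(h, J, off, s)) < 1e-12
```

The constant collected into `offset` is why both model dataclasses carry an `offset` field: without it the two forms differ by exactly that constant.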

3.1 SCBitstreamQUBO — two task-specific encodings

Python
class SCBitstreamQUBO:
    def __init__(self, penalty: float = 5.0): ...

    def weight_optimization(self, target_output, candidate_weights, n_bits=8) -> QUBOModel: ...
    def pruning(self, adjacency, importance_scores, max_connections) -> QUBOModel: ...

A specialised QUBO compiler that targets two SC optimisation patterns common in research:

Weight optimisation

Find binary vector x ∈ {0, 1}ⁿ minimising ||target − W @ x||².

The QUBO formulation expands the squared error:

Text Only
   ||y − Wx||² = xᵀ(WᵀW)x − 2yᵀWx + yᵀy
  • Off-diagonal Q[i,j] = (WᵀW)[i,j] + (WᵀW)[j,i] (full upper-triangular)
  • Diagonal Q[i,i] = (WᵀW)[i,i] − 2(Wᵀy)[i]
  • Constant offset = yᵀy (so the model's true energy is xᵀQx + offset)

n = min(WᵀW.shape[0], n_bits) so callers can bound the qubit count even when the candidate matrix is wider than the budget. Returned QUBOModel.source = "sc_weight_optimization".
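The expansion above can be sanity-checked with a short script. A sketch (the helper `weight_qubo` is illustrative, not the library's method) that builds Q from W and y and verifies the QUBO energy reproduces ||y − Wx||² for every binary x:

```python
import numpy as np
from itertools import product

def weight_qubo(W, y):
    # ||y - Wx||^2 = x^T (W^T W) x - 2 (W^T y)^T x + y^T y; binary x means x_i^2 = x_i
    G = W.T @ W
    c = W.T @ y
    n = G.shape[0]
    Q = {}
    for i in range(n):
        Q[(i, i)] = G[i, i] - 2.0 * c[i]       # diagonal absorbs the linear terms
        for j in range(i + 1, n):
            Q[(i, j)] = G[i, j] + G[j, i]      # fold the symmetric pair into the upper triangle
    return Q, float(y @ y)                     # constant offset = y^T y

def qubo_energy(Q, offset, x):
    return offset + sum(q * x[i] * x[j] for (i, j), q in Q.items())

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
y = rng.normal(size=4)
Q, off = weight_qubo(W, y)
for x in product((0, 1), repeat=3):
    direct = float(np.sum((y - W @ np.array(x)) ** 2))
    assert abs(qubo_energy(Q, off, x) - direct) < 1e-9
```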

Pruning

Select max_connections edges from the existing connectivity that maximise the sum of importance scores while honouring the cardinality constraint exactly:

Text Only
   maximise   Σ importance[i,j] · x[edge(i,j)]
   subject to Σ x = max_connections

The encoder creates one binary variable per non-zero off-diagonal edge of the adjacency, applies penalty · (Σx − K)² to enforce the constraint, and returns a QUBO whose ground state is the chosen edge subset.

Note: the cardinality penalty is the standard QUBO trick — it adds penalty · (1 − 2K) to every diagonal and 2 · penalty to every off-diagonal pair. With the default penalty = 5.0, callers should rescale if the importance-score magnitudes are very different from unity.
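That expansion is verifiable in a few lines: penalty · (Σx − K)² equals the QUBO encoding plus the constant penalty · K² (which is why the model carries an offset). A self-contained check:

```python
from itertools import product

P, K, n = 5.0, 2, 4
# QUBO encoding of P * (sum(x) - K)^2, using x_i^2 = x_i for binary x
Q = {}
for i in range(n):
    Q[(i, i)] = P * (1 - 2 * K)      # diagonal contribution
    for j in range(i + 1, n):
        Q[(i, j)] = 2 * P            # pairwise contribution
const = P * K * K                    # constant offset

for x in product((0, 1), repeat=n):
    lhs = P * (sum(x) - K) ** 2
    rhs = const + sum(q * x[i] * x[j] for (i, j), q in Q.items())
    assert abs(lhs - rhs) < 1e-12
```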

3.2 SCPrecisionEncoder — three encodings of [0, 1] values

Python
class SCPrecisionEncoder:
    def __init__(self, encoding: str = "binary", n_bits: int = 8): ...

    def encode(self, sc_value: float) -> dict[int, int]: ...
    def decode(self, qubits: dict[int, int]) -> float: ...
    def encode_array(self, values: np.ndarray) -> dict[int, int]: ...

    @property
    def n_levels(self) -> int: ...
    def qubits_needed(self, n_sc_values: int) -> int: ...

Maps continuous SC probabilities in [0, 1] to fixed-length qubit configurations. Three encodings, each with different qubit-vs-precision trade-offs:

Encoding Qubits per value Levels Good for
binary n_bits 2^n_bits dense precision (8 bits → 256 levels)
unary (thermometer) n_bits n_bits + 1 robust to single-bit errors
one_hot n_bits n_bits categorical, no inter-bit coupling

encode(v) clamps v to [0, 1], scales to the encoding's level count, and returns a {qubit_idx: 0|1} dict for one value. encode_array(values) packs an N-element array into a single global dict by offsetting qubit indices by idx * n_bits. decode(qubits) reverses the mapping per encoding (binary positional sum, unary count of 1s, one-hot index of the 1-bit).

Round-trip accuracy:

  • binary: |encode(v) − decode(...)| ≤ 1 / (2^n_bits − 1) (e.g. ≤ 1/255 ≈ 0.004 at n_bits=8)
  • unary: ≤ 1 / n_bits
  • one_hot: ≤ 1 / (n_bits − 1)
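To make the trade-offs concrete, here is a minimal sketch of the three encodings as free functions (illustrative signatures, not the class API). The asserted tolerances match the round-trip bounds above at n_bits = 8:

```python
import numpy as np

def encode(v, encoding="binary", n_bits=8):
    # Clamp to [0, 1], then map to a {qubit_idx: 0|1} dict
    v = min(max(v, 0.0), 1.0)
    if encoding == "binary":
        level = round(v * (2 ** n_bits - 1))
        return {i: (level >> i) & 1 for i in range(n_bits)}
    if encoding == "unary":                  # thermometer: first k qubits set
        k = round(v * n_bits)
        return {i: int(i < k) for i in range(n_bits)}
    if encoding == "one_hot":                # single 1-bit at the scaled index
        k = round(v * (n_bits - 1))
        return {i: int(i == k) for i in range(n_bits)}
    raise ValueError(f"unknown encoding: {encoding}")

def decode(q, encoding="binary", n_bits=8):
    if encoding == "binary":
        return sum(b << i for i, b in q.items()) / (2 ** n_bits - 1)
    if encoding == "unary":
        return sum(q.values()) / n_bits
    if encoding == "one_hot":
        return max(i for i, b in q.items() if b) / (n_bits - 1)
    raise ValueError(f"unknown encoding: {encoding}")

for enc, tol in (("binary", 1 / 255), ("unary", 1 / 8), ("one_hot", 1 / 7)):
    for v in np.linspace(0.0, 1.0, 33):
        assert abs(decode(encode(v, enc), enc) - v) <= tol + 1e-12
```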

n_levels exposes the level count; qubits_needed(n_sc_values) returns n_sc_values * n_bits so callers can size IsingModel / QUBOModel correctly before encoding.

Construction with an unknown encoding string raises ValueError.

3.3 When to use which compiler

Problem Use
"I have an SC network adjacency, give me an Ising model for D-Wave." SCToIsing().compile(adjacency)
"Same, but I want the QUBO form." SCToQUBO().compile(adjacency)
"I want to find binary weights that match a target output." SCBitstreamQUBO.weight_optimization(...)
"I want to prune to exactly K edges by importance." SCBitstreamQUBO.pruning(adj, importance, K)
"I have continuous values; help me encode them into qubits." SCPrecisionEncoder(encoding=..., n_bits=...).encode_array(...)

The four classes do not chain by default — each produces its own IsingModel or QUBOModel (or a per-value qubit dict for the encoder). Combining them (e.g. encode-then-prune) requires caller-side qubit-index bookkeeping.


4. Solvers

4.1 SimulatedAnnealer

Python
class SimulatedAnnealer:
    def __init__(
        self,
        n_sweeps: int = 1000,
        beta_start: float = 0.1,
        beta_end: float = 10.0,
        seed: int = 42,
    ): ...

    def solve_ising(self, model: IsingModel, num_reads: int = 10) -> dict: ...

Single-spin Metropolis sweeps with a geometric beta schedule from beta_start to beta_end over n_sweeps. Returns a dict with:

  • best_spins: np.ndarray[int8] of length n_qubits
  • best_energy: float
  • energies: list[float], one per read (num_reads entries)
  • samples: np.ndarray[num_reads, n_qubits]

Per-instance seed → reproducible runs. Same seed → identical output (confirmed by reading source — uses np.random.default_rng).
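The loop described above — geometric beta schedule, single-spin Metropolis — can be sketched compactly. This is an illustration of the algorithm, not the library's implementation (it returns a single read rather than best-of-num_reads):

```python
import numpy as np

def solve_ising(h, J, n_qubits, n_sweeps=200, beta_start=0.1, beta_end=10.0, seed=42):
    """Single-spin Metropolis SA with a geometric beta schedule (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    betas = beta_start * (beta_end / beta_start) ** (np.arange(n_sweeps) / max(n_sweeps - 1, 1))
    spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=n_qubits)
    nbrs = {i: [] for i in range(n_qubits)}          # adjacency lists for O(degree) updates
    for (i, j), jij in J.items():
        nbrs[i].append((j, jij))
        nbrs[j].append((i, jij))
    for beta in betas:
        for i in range(n_qubits):
            local = h.get(i, 0.0) + sum(jij * spins[j] for j, jij in nbrs[i])
            dE = -2.0 * spins[i] * local             # ΔE of flipping i under H = Σh·s + ΣJ·s·s
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i] = -spins[i]
    energy = (sum(hi * spins[i] for i, hi in h.items())
              + sum(jij * spins[i] * spins[j] for (i, j), jij in J.items()))
    return spins, float(energy)

# Ferromagnetic chain: ground state is all spins aligned, energy = -(n - 1)
n = 8
J = {(i, i + 1): -1.0 for i in range(n - 1)}
spins, e = solve_ising({}, J, n)
assert e == -(n - 1)                                    # SA finds the chain ground state
assert np.array_equal(spins, solve_ising({}, J, n)[0])  # same seed → identical run
```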

4.2 Rust acceleration path (measured in §6.1)

SimulatedAnnealer.solve_ising (line 467) branches on _HAS_RUST_QA and model.n_qubits > 10:

Python
if _HAS_RUST_QA and model.n_qubits > 10:
    return self._solve_ising_rust(model, num_reads)
return self._solve_ising_python(model, num_reads)

The Rust path uses 6 PyO3 bindings exported by sc_neurocore_engine:

  • py_qa_ising_energy — single-state energy
  • py_qa_simulated_annealing — full SA loop
  • py_qa_batch_ising_energy — vectorised batch energy
  • py_qa_gauge_transform — gauge transform for ICE mitigation
  • py_qa_generate_gauges — random gauge generator
  • py_qa_greedy_partition — ProblemDecomposer accelerator

The class docstring claims "100×+ speedup for models with >20 qubits" — a source-comment figure. §6.1 measures the actual speedup on this workstation (45× / 53× / 379× at N=20/50/100): the claim holds at N=100 but overstates the smaller sizes. To reproduce the Rust-vs-Python parity check:

Bash
cd bridge
maturin develop --release          # builds the local Rust engine wheel in-place
PYTHONPATH=src pytest tests/test_bridges/test_quantum_annealing.py::test_rust_parity

Measurement details: §6.1 (task #49, closed).

4.3 DWaveInterface

Python
class DWaveInterface:
    def __init__(self, solver: str = "Advantage_system6.4"): ...

    def submit(
        self,
        ising: IsingModel,
        num_reads: int = 1000,
        annealing_time_us: float = 20.0,
        chain_strength: float = 2.0,
    ) -> dict: ...

Soft-imports dwave-ocean-sdk at runtime. Raises ImportError("dwave-ocean-sdk required for DWave QPU access") if absent. The interface wraps EmbeddingComposite(DWaveSampler()) and returns the sample-set as a dict (energies, occurrences, chain-break fraction).

The wheel + an active D-Wave Leap account are required to exercise this path. Not measured here.


5. Analysis classes

Class Role Cited basis
EnergyLandscape exhaustive enumeration of small problems (≤16 qubits) classical
EmbeddingAnalyzer embed a logical problem into D-Wave Pegasus topology Choi 2008 minor-embedding
TTSAnalyzer time-to-solution scaling per Rønnow et al. 2014 Science 345:420-424
SampleAggregator de-duplicate samples by spin pattern + summary statistics classical
HardwareGraph model D-Wave Pegasus or Chimera graph topology D-Wave hardware spec
ChainBreakResolver resolve broken chains via majority vote / energy minimisation D-Wave practice
AnnealingSchedule non-monotonic annealing curves (e.g. pause-and-quench) Marshall et al. 2019
GaugeTransform gauge averaging to mitigate intrinsic control errors (ICE) Pelofske et al. 2020
ProblemDecomposer partition large problems into hardware-fitting chunks Booth et al. 2017 (qbsolv)
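The cited TTS methodology defines time-to-solution as the anneal time multiplied by the number of repeats needed to reach a target success probability (99% in Rønnow et al.). A sketch of that standard formula (whether TTSAnalyzer exposes exactly this signature is an assumption):

```python
import math

def tts(t_anneal_us, p_success, target=0.99):
    # Repeats R such that 1 - (1 - p_success)^R >= target:
    # R = ln(1 - target) / ln(1 - p_success); TTS = R * t_anneal
    if p_success >= target:
        return t_anneal_us
    r = math.log(1 - target) / math.log(1 - p_success)
    return r * t_anneal_us

# A 50%-reliable 20 μs anneal needs ~6.64 repeats to hit 99% confidence
assert abs(tts(20.0, 0.5) - 20.0 * math.log(0.01) / math.log(0.5)) < 1e-9
```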

Seven of the nine classes are pure Python; GaugeTransform and ProblemDecomposer delegate to the Rust engine when available (_rust_gauge, _rust_gen_gauges, _rust_partition).
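Gauge averaging relies on an exact symmetry of the Ising Hamiltonian: mapping s_i → g_i·s_i with g_i ∈ {−1, +1} leaves every energy unchanged provided h_i → g_i·h_i and J_ij → g_i·g_j·J_ij. A short check of that invariance using plain dicts (not the bridge's GaugeTransform API):

```python
import numpy as np
from itertools import product

def ising_energy(h, J, s):
    return (sum(hi * s[i] for i, hi in h.items())
            + sum(jij * s[i] * s[j] for (i, j), jij in J.items()))

def gauge(h, J, g):
    # s_i -> g_i*s_i preserves the spectrum when h and J are transformed alongside
    h2 = {i: g[i] * hi for i, hi in h.items()}
    J2 = {(i, j): g[i] * g[j] * jij for (i, j), jij in J.items()}
    return h2, J2

rng = np.random.default_rng(1)
n = 5
h = {i: float(rng.normal()) for i in range(n)}
J = {(i, j): float(rng.normal()) for i in range(n) for j in range(i + 1, n)}
g = rng.choice([-1, 1], size=n)
h2, J2 = gauge(h, J, g)
for s in product((-1, 1), repeat=n):
    s2 = tuple(g[i] * s[i] for i in range(n))   # gauge-transformed configuration
    assert abs(ising_energy(h, J, s) - ising_energy(h2, J2, s2)) < 1e-9
```

Averaging samples over random gauges decorrelates the hardware's per-qubit control errors from the logical problem, which is the ICE-mitigation idea.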


6. Rust speedup — measured (closes task #49)

Three classes use Rust acceleration when the engine is installed:

1. SimulatedAnnealer (line 467) — _solve_ising_rust via py_qa_simulated_annealing
2. GaugeTransform (line 1180) — py_qa_gauge_transform / py_qa_generate_gauges
3. ProblemDecomposer (line 1588) — py_qa_greedy_partition

The bridge's _HAS_RUST_QA flag resolves through sc_neurocore_engine.__init__ re-exports — top-level re-exports were added so from sc_neurocore_engine import py_qa_* works. Engine wheel must be present in the active venv; install with:

Bash
cd bridge && python -m maturin develop --release
# or, for an installed wheel:
pip install target/wheels/sc_neurocore_engine-*.whl

6.1 Measured speedup (this workstation, 2026-04-17)

SimulatedAnnealer(n_sweeps=200, seed=42) solving Erdős–Rényi Ising at p=0.1 with num_reads=5. Hardware: Intel i5-11600K, NumPy 2.2.0 (Python 3.12 venv-rocm with sc_neurocore_engine release wheel installed).

Reproducible via the committed benchmark:

Bash
python benchmarks/bench_quantum_annealing_rust_vs_python.py \
    --json benchmarks/results/bench_qa_rust_vs_python.json

The benchmark runs each (backend, N) cell 5 times and reports median + min wall-clock. Median is the typical-run figure; min estimates underlying compute cost when system noise dominates (Rust at small N is sub-millisecond and noisy).

N qubits Python median (min) Rust median (min) Speedup (median)
20 62.9 ms (59.4 ms) 1.41 ms (0.88 ms) 45×
50 456.9 ms (436.9 ms) 8.70 ms (3.59 ms) 53×
100 3070 ms (2831 ms) 8.10 ms (7.70 ms) 379×

Run-to-run variance: a separate run gave 12×/128×/600×, another gave 136×/183×/283× — at N=100 the absolute speedup ranges ~280×–600×, dominated by Python-side scheduling jitter on the ~3 s pure-Python solve. The committed JSON (benchmarks/results/bench_qa_rust_vs_python.json) records one representative run with median-of-5; readers should re-run on their own hardware before quoting numbers.
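The median/min protocol is easy to replicate for ad-hoc timing. A sketch of the approach (the committed benchmark script's internals are assumed, not quoted):

```python
import statistics
import time

def bench(fn, repeats=5):
    # median ~= typical run; min ~= noise floor when scheduling jitter dominates
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return statistics.median(times), min(times)

median_s, min_s = bench(lambda: sum(i * i for i in range(100_000)))
assert min_s <= median_s
```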

The docstring's "100×+" is supported at N = 100 (379×) but not below: the median speedup is 53× at N=50 and 45× at N=20, because at small N the Python work per inner loop is small enough that Rust dispatch + PyO3 marshalling overhead becomes a non-trivial fraction of total wall time. The docstring should be relaxed to "50×+ for N ≥ 50, ~45× at N=20" — tracked as follow-up.

Why these numbers differ from earlier drafts of this section: the previous table (5.8× / 761× / 1593×) was measured before the _solve_ising_python sign-error bug fix in commit d7a4d322. The buggy Metropolis short-circuited downhill moves (no rng() calls on accepted flips), so Python was artificially fast and Rust speedup looked artificially large at N ≥ 50. The numbers above are the post-fix, real-Metropolis baseline.

The dispatch threshold model.n_qubits > 10 (line 467) is appropriate for this hardware: even at N=20 Rust is ~45× faster than Python after the fix. Lowering the threshold further (e.g. to N > 4) is unlikely to change real workloads — the small-N case is already <1 ms in either backend.


7. Performance — pure-Python path (this workstation)

Random Erdős–Rényi adjacency at p=0.1, undirected, single compile + 5-read SA with 100 sweeps:

N density SCToQUBO.compile SCToIsing.compile SA solve (5 reads × 100 sweeps)
10 0.100 0.48 ms 0.10 ms 10.19 ms
50 0.100 1.22 ms 0.92 ms 278.81 ms
100 0.100 3.29 ms 2.36 ms 2 205.23 ms

Compile cost is roughly linear in n_edges. The SA solve cost is super-linear (≈8× going from N=50 to N=100) — confirming the spin-by-spin Python loop is the bottleneck and motivating the Rust path. At N=100 a single solve already takes ~2 seconds; N=1000 with default sweeps would take minutes per read and many hours for the default 1000 reads.

Hardware: Intel i5-11600K, NumPy 2.2.6, no Rust wheel in this venv (pure-Python baseline). See §6.1 for the measured Rust-vs-Python comparison on this hardware (different sweep settings, so the absolute times are not directly comparable).


8. Pipeline wiring

Surface How it's wired Verifier
from sc_neurocore.bridges.quantum_annealing import SCToIsing, ... bridges/__init__.py re-exports all 24 symbols tests/test_bridges/test_quantum_annealing.py
SCToQUBO.compile / SCToIsing.compile independent compilers; both accept an N×N matrix dedicated tests for each
SimulatedAnnealer.solve_ising Rust dispatch _HAS_RUST_QA and model.n_qubits > 10 branch covered when engine wheel present; falls through to Python otherwise
DWaveInterface soft-imports dwave.system lazily tests skip when wheel absent
export_bqm requires dimod; raises if absent dimod-skip path tested

9. Tests

Bash
PYTHONPATH=src python3 -m pytest tests/test_bridges/test_quantum_annealing.py -q
# (part of the 198-test bridges suite — verified 2026-04-17)

tests/test_bridges/test_quantum_annealing.py is 652 lines covering: dataclass round-trip, SCToQUBO/SCToIsing compilation on small matrices, SimulatedAnnealer solver determinism with fixed seed, EnergyLandscape exhaustive enumeration on ≤4 qubits (matches Python brute-force), EmbeddingAnalyzer chain length estimation, ChainBreakResolver majority-vote correctness, GaugeTransform round-trip, TTSAnalyzer scaling estimate, SampleAggregator de-duplication.

What is NOT covered:

  • Rust-vs-Python parity in CI (engine wheel not installed in CI; measured locally — see §6.1)
  • Real D-Wave QPU submission (requires Leap account)
  • Large-problem decomposition (ProblemDecomposer tested only on N≤30; partition correctness at N=10⁴ would need a stress test)
  • DWaveInterface happy path (skip-if-no-dwave)


10. Audit (7-point checklist)

# Dimension Status Detail
1 Pipeline wiring ✅ PASS All 24 symbols re-exported and tested
2 Multi-angle tests ✅ PASS 652-line dedicated test file in 198-test bridges suite
3 Rust path ✅ PASS All 6 PyO3 bindings re-exported via bridge/sc_neurocore_engine/__init__.py; _HAS_RUST_QA = True when wheel installed; SimulatedAnnealer.solve_ising dispatches via the n_qubits > 10 branch
4 Benchmarks ✅ PASS §6.1 Rust vs Python comparison (median-of-5): 45× (N=20), 53× (N=50), 379× (N=100). §7 retains pure-Python numbers for reference
5 Performance docs ✅ PASS §7 with explicit "pure-Python only" caveat
6 Documentation page ✅ PASS This page
7 Rules followed ✅ PASS SPDX header ✅. Soft-imports for dimod, dwave-ocean-sdk, sc_neurocore_engine all guarded. British English in this doc; source uses standard scientific-Python identifiers (acceptable per docs-vs-code rule).

Net: 0 WARN, 0 FAIL. Both former WARNs closed by task #49 — engine wheel built via maturin develop --release in bridge/, re-exports added to sc_neurocore_engine.__init__, Rust speedup measured (§6.1).


11. Known issues

11.1 Rust speedup (CLOSED by task #49)

§6.1 reports the measured comparison: 45× / 53× / 379× at N=20/50/100. The docstring's "100×+" claim holds at N=100 but overstates N ≤ 50 — see §6.1 for the suggested relaxed wording. The engine wheel must be installed in the active venv — see §6 for build instructions.

11.2 SCBitstreamQUBO and SCPrecisionEncoder (DOCUMENTED by task #50)

Both classes now have dedicated subsections under §3: - §3.1 covers SCBitstreamQUBO.weight_optimization and .pruning with the QUBO derivation, cardinality penalty pattern, and source field outputs. - §3.2 covers SCPrecisionEncoder with the three encodings (binary / unary / one_hot), per-encoding qubit-vs-level trade-off table, and round-trip accuracy bounds. - §3.3 is a "when to use which compiler" table covering all four compilers in this bridge (SCToIsing, SCToQUBO, SCBitstreamQUBO, SCPrecisionEncoder).

11.3 No D-Wave hardware-parity test

SCToIsing → SimulatedAnnealer is tested. SCToIsing → DWaveInterface → QPU → samples is not (no Leap account in CI). Adding a parity test against neal.SimulatedAnnealingSampler (D-Wave's reference SA) would validate the compiler output without needing real hardware. Tracked as task #51.

11.4 EmbeddingAnalyzer assumes Pegasus topology

EmbeddingAnalyzer.__init__(topology="pegasus", size=16) defaults to D-Wave Advantage's Pegasus graph. Older Chimera (D-Wave 2000Q) and the new Zephyr (Advantage2) need explicit topology selection. Document the topology options in the class docstring.

11.5 Rust dispatch threshold is hard-coded

SimulatedAnnealer.solve_ising only dispatches to Rust when model.n_qubits > 10 (line 467). The threshold is a magic number; expose as __init__ parameter or class constant.


12. References

Quantum annealing theory:

  • Kadowaki T., Nishimori H. "Quantum annealing in the transverse Ising model." Phys Rev E 58:5355-5363 (1998). The original QA proposal.
  • Farhi E. et al. "Quantum computation by adiabatic evolution." arXiv:quant-ph/0001106 (2000). Adiabatic quantum computation formalism.

D-Wave hardware + minor-embedding:

  • Choi V. "Minor-embedding in adiabatic quantum computation: I. The parameter setting problem." Quantum Inf Process 7:193-209 (2008). Minor-embedding theory for EmbeddingAnalyzer.
  • Boothby K. et al. "Next-Generation Topology of D-Wave Quantum Processors." arXiv:2003.00133 (2020). Pegasus topology used in HardwareGraph.

Solvers + analysis:

  • Rønnow T. F. et al. "Defining and detecting quantum speedup." Science 345:420-424 (2014). TTS methodology used by TTSAnalyzer.
  • Marshall J. et al. "Power of pausing: Advancing understanding of thermalization in experimental quantum annealers." Phys Rev Applied 11:044083 (2019). Inspiration for AnnealingSchedule pause-and-quench.
  • Pelofske E. et al. "Decomposition Algorithms for Solving NP-hard Problems on a Quantum Annealer." J Signal Process Syst 93:405-420 (2021). ProblemDecomposer ancestor.
  • Booth M. et al. "Partitioning Optimization Problems for Hybrid Classical/Quantum Execution." D-Wave Technical Report (2017). qbsolv methodology.


13. Auto-rendered API

sc_neurocore.bridges.quantum_annealing

Quantum annealing bridge for SC bitstream networks.

Compiles SC neural networks into Ising/QUBO representations suitable for D-Wave quantum annealers and classical simulated annealing solvers.

Architecture

::

Text Only
SC Network  →  QUBO Compiler  →  Ising/QUBO Model  →  D-Wave / SA Solver
     ↓               ↓                 ↓                      ↓
Populations    Gate→Coupling      Energy landscape       Ground state
Projections    Weight→Field       Partition function     Optimal config

Module Structure

  • Data classes: QubitSpec, CouplerSpec, IsingModel, QUBOModel
  • Compilers: SCToIsing, SCToQUBO
  • Solvers: SimulatedAnnealer, DWaveInterface
  • Analysis: EnergyLandscape, EmbeddingAnalyzer
  • Export: export_bqm, export_qubo_json, export_ising_json

Dependencies

  • numpy — required
  • dwave-ocean-sdk — optional, soft-imported for D-Wave QPU access
  • dimod — optional, soft-imported for BQM interop

ProblemType

Bases: Enum

Quantum optimization problem type.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
class ProblemType(Enum):
    """Quantum optimization problem type."""

    ISING = "ising"
    QUBO = "qubo"

QubitSpec dataclass

Specification for a single logical qubit.

Attributes

index : int Logical qubit index. label : str Human-readable label (e.g. neuron name). bias : float Local field / linear bias (h_i in Ising, Q_ii in QUBO).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
@dataclass
class QubitSpec:
    """Specification for a single logical qubit.

    Attributes
    ----------
    index : int
        Logical qubit index.
    label : str
        Human-readable label (e.g. neuron name).
    bias : float
        Local field / linear bias (h_i in Ising, Q_ii in QUBO).
    """

    index: int
    label: str
    bias: float = 0.0

CouplerSpec dataclass

Specification for a qubit-qubit coupling.

Attributes

qubit_a : int First qubit index. qubit_b : int Second qubit index. strength : float Coupling strength (J_ij in Ising, Q_ij in QUBO).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
@dataclass
class CouplerSpec:
    """Specification for a qubit-qubit coupling.

    Attributes
    ----------
    qubit_a : int
        First qubit index.
    qubit_b : int
        Second qubit index.
    strength : float
        Coupling strength (J_ij in Ising, Q_ij in QUBO).
    """

    qubit_a: int
    qubit_b: int
    strength: float = 0.0

IsingModel dataclass

Ising spin-glass model: H = Σ h_i·s_i + Σ J_ij·s_i·s_j.

Attributes

h : dict[int, float] Linear biases (local fields). Key = qubit index. J : dict[tuple[int, int], float] Quadratic couplings. Key = (i, j) pair, i < j. offset : float Constant energy offset. qubit_labels : dict[int, str] Index → label mapping. n_qubits : int Total logical qubits. source : str Origin description.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
@dataclass
class IsingModel:
    """Ising spin-glass model: H = Σ h_i·s_i + Σ J_ij·s_i·s_j.

    Attributes
    ----------
    h : dict[int, float]
        Linear biases (local fields). Key = qubit index.
    J : dict[tuple[int, int], float]
        Quadratic couplings. Key = (i, j) pair, i < j.
    offset : float
        Constant energy offset.
    qubit_labels : dict[int, str]
        Index → label mapping.
    n_qubits : int
        Total logical qubits.
    source : str
        Origin description.
    """

    h: Dict[int, float] = field(default_factory=dict)
    J: Dict[tuple[int, int], float] = field(default_factory=dict)
    offset: float = 0.0
    qubit_labels: Dict[int, str] = field(default_factory=dict)
    n_qubits: int = 0
    source: str = ""

    def energy(self, spins: Dict[int, int]) -> float:
        """Compute Ising energy for a spin configuration.

        Delegates to Rust engine when available for large models.

        Parameters
        ----------
        spins : dict[int, int]
            Spin values (+1 or -1) per qubit index.
        """
        if _HAS_RUST_QA and self.n_qubits > 20:
            h_indices = list(self.h.keys())
            h_values = [self.h[i] for i in h_indices]
            j_i = [k[0] for k in self.J]
            j_j = [k[1] for k in self.J]
            j_values = list(self.J.values())
            spin_arr = [spins.get(i, 1) for i in range(self.n_qubits)]
            return _rust_ising_energy(
                h_indices,
                h_values,
                j_i,
                j_j,
                j_values,
                spin_arr,
                self.offset,
            )
        e = self.offset
        for i, hi in self.h.items():
            e += hi * spins.get(i, 1)
        for (i, j), jij in self.J.items():
            e += jij * spins.get(i, 1) * spins.get(j, 1)
        return e

energy(spins)

Compute Ising energy for a spin configuration.

Delegates to Rust engine when available for large models.

Parameters

spins : dict[int, int] Spin values (+1 or -1) per qubit index.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def energy(self, spins: Dict[int, int]) -> float:
    """Compute Ising energy for a spin configuration.

    Delegates to Rust engine when available for large models.

    Parameters
    ----------
    spins : dict[int, int]
        Spin values (+1 or -1) per qubit index.
    """
    if _HAS_RUST_QA and self.n_qubits > 20:
        h_indices = list(self.h.keys())
        h_values = [self.h[i] for i in h_indices]
        j_i = [k[0] for k in self.J]
        j_j = [k[1] for k in self.J]
        j_values = list(self.J.values())
        spin_arr = [spins.get(i, 1) for i in range(self.n_qubits)]
        return _rust_ising_energy(
            h_indices,
            h_values,
            j_i,
            j_j,
            j_values,
            spin_arr,
            self.offset,
        )
    e = self.offset
    for i, hi in self.h.items():
        e += hi * spins.get(i, 1)
    for (i, j), jij in self.J.items():
        e += jij * spins.get(i, 1) * spins.get(j, 1)
    return e

QUBOModel dataclass

QUBO model: min x^T Q x.

Attributes

Q : dict[tuple[int, int], float] QUBO matrix entries. Diagonal = linear, off-diagonal = quadratic. offset : float Constant energy offset. qubit_labels : dict[int, str] Index → label mapping. n_qubits : int Total logical qubits. source : str Origin description.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
@dataclass
class QUBOModel:
    """QUBO model: min x^T Q x.

    Attributes
    ----------
    Q : dict[tuple[int, int], float]
        QUBO matrix entries. Diagonal = linear, off-diagonal = quadratic.
    offset : float
        Constant energy offset.
    qubit_labels : dict[int, str]
        Index → label mapping.
    n_qubits : int
        Total logical qubits.
    source : str
        Origin description.
    """

    Q: Dict[tuple[int, int], float] = field(default_factory=dict)
    offset: float = 0.0
    qubit_labels: Dict[int, str] = field(default_factory=dict)
    n_qubits: int = 0
    source: str = ""

    def energy(self, bits: Dict[int, int]) -> float:
        """Compute QUBO energy for a binary configuration.

        Parameters
        ----------
        bits : dict[int, int]
            Binary values (0 or 1) per qubit index.
        """
        e = self.offset
        for (i, j), qij in self.Q.items():
            e += qij * bits.get(i, 0) * bits.get(j, 0)
        return e

    def to_ising(self) -> IsingModel:
        """Convert QUBO to Ising model.

        Uses the standard transformation: x_i = (s_i + 1) / 2.
        """
        h: Dict[int, float] = {}
        j_couplings: Dict[tuple[int, int], float] = {}
        offset = self.offset

        for (i, j), qij in self.Q.items():
            if i == j:
                h[i] = h.get(i, 0.0) + qij / 2.0
                offset += qij / 2.0
            else:
                a, b = min(i, j), max(i, j)
                j_couplings[(a, b)] = j_couplings.get((a, b), 0.0) + qij / 4.0
                h[i] = h.get(i, 0.0) + qij / 4.0
                h[j] = h.get(j, 0.0) + qij / 4.0
                offset += qij / 4.0

        return IsingModel(
            h=h,
            J=j_couplings,
            offset=offset,
            qubit_labels=dict(self.qubit_labels),
            n_qubits=self.n_qubits,
            source=f"{self.source} (QUBO→Ising)",
        )

energy(bits)

Compute QUBO energy for a binary configuration.

Parameters

bits : dict[int, int] Binary values (0 or 1) per qubit index.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def energy(self, bits: Dict[int, int]) -> float:
    """Compute QUBO energy for a binary configuration.

    Parameters
    ----------
    bits : dict[int, int]
        Binary values (0 or 1) per qubit index.
    """
    e = self.offset
    for (i, j), qij in self.Q.items():
        e += qij * bits.get(i, 0) * bits.get(j, 0)
    return e

to_ising()

Convert QUBO to Ising model.

Uses the standard transformation: x_i = (s_i + 1) / 2.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def to_ising(self) -> IsingModel:
    """Convert QUBO to Ising model.

    Uses the standard transformation: x_i = (s_i + 1) / 2.
    """
    h: Dict[int, float] = {}
    j_couplings: Dict[tuple[int, int], float] = {}
    offset = self.offset

    for (i, j), qij in self.Q.items():
        if i == j:
            h[i] = h.get(i, 0.0) + qij / 2.0
            offset += qij / 2.0
        else:
            a, b = min(i, j), max(i, j)
            j_couplings[(a, b)] = j_couplings.get((a, b), 0.0) + qij / 4.0
            h[i] = h.get(i, 0.0) + qij / 4.0
            h[j] = h.get(j, 0.0) + qij / 4.0
            offset += qij / 4.0

    return IsingModel(
        h=h,
        J=j_couplings,
        offset=offset,
        qubit_labels=dict(self.qubit_labels),
        n_qubits=self.n_qubits,
        source=f"{self.source} (QUBO→Ising)",
    )

SCToIsing

Compile SC network adjacency matrices into Ising models.

Maps SC populations to qubits and projections to couplings. Excitatory connections → ferromagnetic (J < 0, favoring alignment). Inhibitory connections → antiferromagnetic (J > 0, favoring anti-alignment).

Parameters

coupling_scale : float Multiplier applied to connection weights (default 1.0). field_scale : float Multiplier for external field from bias (default 0.1).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
class SCToIsing:
    """Compile SC network adjacency matrices into Ising models.

    Maps SC populations to qubits and projections to couplings.
    Excitatory connections → ferromagnetic (J < 0, favoring alignment).
    Inhibitory connections → antiferromagnetic (J > 0, favoring anti-alignment).

    Parameters
    ----------
    coupling_scale : float
        Multiplier applied to connection weights (default 1.0).
    field_scale : float
        Multiplier for external field from bias (default 0.1).
    """

    def __init__(
        self,
        coupling_scale: float = 1.0,
        field_scale: float = 0.1,
    ) -> None:
        self._coupling_scale = coupling_scale
        self._field_scale = field_scale

    def compile(
        self,
        adjacency: np.ndarray[Any, Any],
        node_labels: list[str] | None = None,
        biases: np.ndarray[Any, Any] | None = None,
        name: str = "sc_ising",
    ) -> IsingModel:
        """Compile adjacency matrix into an Ising model.

        Parameters
        ----------
        adjacency : np.ndarray
            N×N weight matrix. Positive = excitatory, negative = inhibitory.
        node_labels : list[str] | None
            Labels for each node (default: n0, n1, ...).
        biases : np.ndarray | None
            1D array of per-node biases (default: zeros).
        name : str
            Model name.

        Returns
        -------
        IsingModel
        """
        n = adjacency.shape[0]
        labels = node_labels or [f"n{i}" for i in range(n)]
        bias_arr = biases if biases is not None else np.zeros(n)

        h: Dict[int, float] = {}
        j_couplings: Dict[tuple[int, int], float] = {}
        qubit_labels: Dict[int, str] = {}

        for i in range(n):
            qubit_labels[i] = labels[i]
            h[i] = float(bias_arr[i]) * self._field_scale

        for i in range(n):
            for j in range(i + 1, n):
                w = float(adjacency[i, j] + adjacency[j, i]) / 2.0
                if abs(w) > 1e-12:
                    # Excitatory (w > 0) → J < 0 (ferromagnetic)
                    j_couplings[(i, j)] = -w * self._coupling_scale

        return IsingModel(
            h=h,
            J=j_couplings,
            offset=0.0,
            qubit_labels=qubit_labels,
            n_qubits=n,
            source=name,
        )

compile(adjacency, node_labels=None, biases=None, name='sc_ising')

Compile adjacency matrix into an Ising model.

Parameters

adjacency : np.ndarray
    N×N weight matrix. Positive = excitatory, negative = inhibitory.
node_labels : list[str] | None
    Labels for each node (default: n0, n1, ...).
biases : np.ndarray | None
    1D array of per-node biases (default: zeros).
name : str
    Model name.

Returns

IsingModel

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def compile(
    self,
    adjacency: np.ndarray[Any, Any],
    node_labels: list[str] | None = None,
    biases: np.ndarray[Any, Any] | None = None,
    name: str = "sc_ising",
) -> IsingModel:
    """Compile adjacency matrix into an Ising model.

    Parameters
    ----------
    adjacency : np.ndarray
        N×N weight matrix. Positive = excitatory, negative = inhibitory.
    node_labels : list[str] | None
        Labels for each node (default: n0, n1, ...).
    biases : np.ndarray | None
        1D array of per-node biases (default: zeros).
    name : str
        Model name.

    Returns
    -------
    IsingModel
    """
    n = adjacency.shape[0]
    labels = node_labels or [f"n{i}" for i in range(n)]
    bias_arr = biases if biases is not None else np.zeros(n)

    h: Dict[int, float] = {}
    j_couplings: Dict[tuple[int, int], float] = {}
    qubit_labels: Dict[int, str] = {}

    for i in range(n):
        qubit_labels[i] = labels[i]
        h[i] = float(bias_arr[i]) * self._field_scale

    for i in range(n):
        for j in range(i + 1, n):
            w = float(adjacency[i, j] + adjacency[j, i]) / 2.0
            if abs(w) > 1e-12:
                # Excitatory (w > 0) → J < 0 (ferromagnetic)
                j_couplings[(i, j)] = -w * self._coupling_scale

    return IsingModel(
        h=h,
        J=j_couplings,
        offset=0.0,
        qubit_labels=qubit_labels,
        n_qubits=n,
        source=name,
    )
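The sign convention can be illustrated on a hypothetical two-node network (the weights below are made up for the example): the directed weights are symmetrized, then negated, so a net-excitatory pair produces a ferromagnetic coupling.

```python
import numpy as np

# One excitatory link (0 -> 1, w = 0.8) and one inhibitory link (1 -> 0, w = -0.2).
adjacency = np.array([[0.0, 0.8],
                      [-0.2, 0.0]])

# Symmetrize the pair of directed weights, as compile() does.
w = float(adjacency[0, 1] + adjacency[1, 0]) / 2.0   # 0.3

# Excitatory net weight (w > 0) -> J < 0, i.e. the spins prefer to align.
coupling_scale = 1.0
J_01 = -w * coupling_scale
```

With `coupling_scale=1.0` this yields `J_01 ≈ -0.3`: the annealer's ground state aligns the two qubits, mirroring the excitatory drive.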

SCToQUBO

Compile SC network into QUBO formulation.

Parameters

penalty : float
    Constraint penalty coefficient (default 2.0).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class SCToQUBO:
    """Compile SC network into QUBO formulation.

    Parameters
    ----------
    penalty : float
        Constraint penalty coefficient (default 2.0).
    """

    def __init__(self, penalty: float = 2.0) -> None:
        self._penalty = penalty

    def compile(
        self,
        adjacency: np.ndarray[Any, Any],
        node_labels: list[str] | None = None,
        name: str = "sc_qubo",
    ) -> QUBOModel:
        """Compile adjacency matrix into a QUBO model.

        Parameters
        ----------
        adjacency : np.ndarray
            N×N weight matrix.
        node_labels : list[str] | None
            Labels for each node.
        name : str
            Model name.

        Returns
        -------
        QUBOModel
        """
        n = adjacency.shape[0]
        labels = node_labels or [f"n{i}" for i in range(n)]
        q_matrix: Dict[tuple[int, int], float] = {}
        qubit_labels: Dict[int, str] = {}

        for i in range(n):
            qubit_labels[i] = labels[i]

        for i in range(n):
            for j in range(i, n):
                if i == j:
                    # Diagonal: self-bias (sum of incoming weights)
                    q_matrix[(i, i)] = -float(np.sum(np.abs(adjacency[:, i])))
                else:
                    w = float(adjacency[i, j] + adjacency[j, i]) / 2.0
                    if abs(w) > 1e-12:
                        q_matrix[(i, j)] = w * self._penalty

        return QUBOModel(
            Q=q_matrix,
            offset=0.0,
            qubit_labels=qubit_labels,
            n_qubits=n,
            source=name,
        )

compile(adjacency, node_labels=None, name='sc_qubo')

Compile adjacency matrix into a QUBO model.

Parameters

adjacency : np.ndarray
    N×N weight matrix.
node_labels : list[str] | None
    Labels for each node.
name : str
    Model name.

Returns

QUBOModel

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def compile(
    self,
    adjacency: np.ndarray[Any, Any],
    node_labels: list[str] | None = None,
    name: str = "sc_qubo",
) -> QUBOModel:
    """Compile adjacency matrix into a QUBO model.

    Parameters
    ----------
    adjacency : np.ndarray
        N×N weight matrix.
    node_labels : list[str] | None
        Labels for each node.
    name : str
        Model name.

    Returns
    -------
    QUBOModel
    """
    n = adjacency.shape[0]
    labels = node_labels or [f"n{i}" for i in range(n)]
    q_matrix: Dict[tuple[int, int], float] = {}
    qubit_labels: Dict[int, str] = {}

    for i in range(n):
        qubit_labels[i] = labels[i]

    for i in range(n):
        for j in range(i, n):
            if i == j:
                # Diagonal: self-bias (sum of incoming weights)
                q_matrix[(i, i)] = -float(np.sum(np.abs(adjacency[:, i])))
            else:
                w = float(adjacency[i, j] + adjacency[j, i]) / 2.0
                if abs(w) > 1e-12:
                    q_matrix[(i, j)] = w * self._penalty

    return QUBOModel(
        Q=q_matrix,
        offset=0.0,
        qubit_labels=qubit_labels,
        n_qubits=n,
        source=name,
    )
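The diagonal rule can be sketched in isolation (the adjacency values are illustrative): each node's self-bias is the negated sum of absolute incoming weights, which lowers the energy of activating well-connected nodes.

```python
import numpy as np

# Column i holds node i's incoming weights, so the diagonal entry is
# -(sum of |column i|), matching the rule in compile() above.
adjacency = np.array([[0.0, 1.0, 0.0],
                      [0.5, 0.0, -0.5],
                      [0.0, 2.0, 0.0]])
diag = {i: -float(np.sum(np.abs(adjacency[:, i]))) for i in range(3)}
# |column| sums: col0 = 0.5, col1 = 3.0, col2 = 0.5
```

Node 1 receives the strongest total input here, so it gets the most negative self-bias.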

SimulatedAnnealer

Classical simulated annealing solver for Ising/QUBO models.

Implements the Metropolis-Hastings algorithm with exponential temperature schedule.

Parameters

n_sweeps : int
    Number of Monte Carlo sweeps (default 1000).
beta_start : float
    Initial inverse temperature (default 0.1).
beta_end : float
    Final inverse temperature (default 10.0).
seed : int
    Random seed.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class SimulatedAnnealer:
    """Classical simulated annealing solver for Ising/QUBO models.

    Implements the Metropolis-Hastings algorithm with exponential
    temperature schedule.

    Parameters
    ----------
    n_sweeps : int
        Number of Monte Carlo sweeps (default 1000).
    beta_start : float
        Initial inverse temperature (default 0.1).
    beta_end : float
        Final inverse temperature (default 10.0).
    seed : int
        Random seed.
    """

    def __init__(
        self,
        n_sweeps: int = 1000,
        beta_start: float = 0.1,
        beta_end: float = 10.0,
        seed: int = 42,
    ) -> None:
        self._n_sweeps = n_sweeps
        self._beta_start = beta_start
        self._beta_end = beta_end
        self._rng = np.random.default_rng(seed)

    def solve_ising(
        self,
        model: IsingModel,
        num_reads: int = 10,
    ) -> Dict[str, Any]:
        """Solve an Ising model via simulated annealing.

        Delegates to Rust engine when available (100×+ speedup
        for models with >20 qubits).

        Parameters
        ----------
        model : IsingModel
            The Ising model to solve.
        num_reads : int
            Number of independent annealing runs.

        Returns
        -------
        dict
            ``best_spins``, ``best_energy``, ``energies``, ``samples``.
        """
        if _HAS_RUST_QA and model.n_qubits > 10:
            return self._solve_ising_rust(model, num_reads)
        return self._solve_ising_python(model, num_reads)

    def _solve_ising_rust(
        self,
        model: IsingModel,
        num_reads: int,
    ) -> Dict[str, Any]:
        """Rust-accelerated SA path."""
        h_indices = list(model.h.keys())
        h_values = [model.h[i] for i in h_indices]
        j_i = [k[0] for k in model.J]
        j_j = [k[1] for k in model.J]
        j_values = list(model.J.values())

        result = _rust_sa(
            [int(x) for x in h_indices],
            [float(x) for x in h_values],
            [int(x) for x in j_i],
            [int(x) for x in j_j],
            [float(x) for x in j_values],
            int(model.n_qubits),
            float(model.offset),
            int(self._n_sweeps),
            int(num_reads),
            float(self._beta_start),
            float(self._beta_end),
            42,
        )

        best_spins_list = result["best_spins"]
        best_spins = {i: int(s) for i, s in enumerate(best_spins_list)}

        samples = []
        for sample_list in result.get("samples", []):
            samples.append({i: int(s) for i, s in enumerate(sample_list)})

        return {
            "best_spins": best_spins,
            "best_energy": result["best_energy"],
            "energies": result.get("energies", []),
            "samples": samples,
            "n_sweeps": self._n_sweeps,
            "num_reads": num_reads,
            "backend": "rust",
        }

    def _solve_ising_python(
        self,
        model: IsingModel,
        num_reads: int,
    ) -> Dict[str, Any]:
        """Pure-Python SA fallback."""
        n = model.n_qubits
        best_energy = float("inf")
        best_spins: Dict[int, int] = {}
        all_energies: list[float] = []
        all_samples: list[Dict[int, int]] = []

        for _ in range(num_reads):
            spins = {i: int(self._rng.choice([-1, 1])) for i in range(n)}
            energy = model.energy(spins)

            for sweep in range(self._n_sweeps):
                beta = self._beta_start * (
                    (self._beta_end / self._beta_start) ** (sweep / max(self._n_sweeps - 1, 1))
                )

                for qubit in range(n):
                    # ΔE for flipping s_q → -s_q is
                    #   ΔE = −2·s_q·(h_q + Σ_k J_qk·s_k).
                    local_field = model.h.get(qubit, 0.0)
                    for (i, j), jij in model.J.items():
                        if i == qubit:
                            local_field += jij * spins.get(j, 1)
                        elif j == qubit:
                            local_field += jij * spins.get(i, 1)
                    de = -2.0 * spins[qubit] * local_field

                    if de < 0 or self._rng.random() < math.exp(-beta * de):
                        spins[qubit] *= -1
                        energy += de

            all_energies.append(energy)
            all_samples.append(dict(spins))

            if energy < best_energy:
                best_energy = energy
                best_spins = dict(spins)

        return {
            "best_spins": best_spins,
            "best_energy": best_energy,
            "energies": all_energies,
            "samples": all_samples,
            "n_sweeps": self._n_sweeps,
            "num_reads": num_reads,
            "backend": "python",
        }

    def solve_qubo(
        self,
        model: QUBOModel,
        num_reads: int = 10,
    ) -> Dict[str, Any]:
        """Solve a QUBO model via simulated annealing.

        Converts to Ising internally, solves, then maps back to binary.
        """
        ising = model.to_ising()
        result = self.solve_ising(ising, num_reads=num_reads)

        # Convert spins → bits
        best_bits = {i: (s + 1) // 2 for i, s in result["best_spins"].items()}
        samples_bits = [
            {i: (s + 1) // 2 for i, s in sample.items()} for sample in result["samples"]
        ]

        return {
            "best_bits": best_bits,
            "best_energy": model.energy(best_bits),
            "energies": [model.energy(s) for s in samples_bits],
            "samples": samples_bits,
            "n_sweeps": self._n_sweeps,
            "num_reads": num_reads,
        }
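Stripped of the Rust dispatch and bookkeeping, the core Metropolis loop with the same exponential β schedule can be sketched stand-alone (dict-based, no dependency on the bridge classes; the ferromagnetic chain at the bottom is an illustrative test problem):

```python
import math
import random

def anneal(h, J, n, n_sweeps=500, beta_start=0.1, beta_end=10.0, seed=0):
    """Single-read Metropolis annealer over spins in {-1, +1}."""
    rng = random.Random(seed)
    spins = {i: rng.choice([-1, 1]) for i in range(n)}
    for sweep in range(n_sweeps):
        # Geometric schedule: beta rises from beta_start to beta_end.
        beta = beta_start * (beta_end / beta_start) ** (sweep / max(n_sweeps - 1, 1))
        for q in range(n):
            # Local field on qubit q from its bias and touching couplers.
            field = h.get(q, 0.0)
            for (i, j), jij in J.items():
                if i == q:
                    field += jij * spins[j]
                elif j == q:
                    field += jij * spins[i]
            de = -2.0 * spins[q] * field  # energy change if s_q flips
            if de < 0 or rng.random() < math.exp(-beta * de):
                spins[q] = -spins[q]
    energy = sum(h.get(i, 0.0) * s for i, s in spins.items())
    energy += sum(jij * spins[i] * spins[j] for (i, j), jij in J.items())
    return spins, energy

# Ferromagnetic 3-spin chain: ground states are all-up / all-down, energy -2.
spins, e = anneal({}, {(0, 1): -1.0, (1, 2): -1.0}, 3)
```

At the final β of 10, uphill flips are accepted with probability below exp(-20), so the run freezes into one of the two degenerate ground states.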

solve_ising(model, num_reads=10)

Solve an Ising model via simulated annealing.

Delegates to the Rust engine when available. The 100×+ speedup for models with >20 qubits is as reported by the docstring, not re-measured here (see the honesty notice at the top of this page); note that the code actually switches to the Rust path at n_qubits > 10.

Parameters

model : IsingModel
    The Ising model to solve.
num_reads : int
    Number of independent annealing runs.

Returns

dict
    best_spins, best_energy, energies, samples.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def solve_ising(
    self,
    model: IsingModel,
    num_reads: int = 10,
) -> Dict[str, Any]:
    """Solve an Ising model via simulated annealing.

    Delegates to Rust engine when available (100×+ speedup
    for models with >20 qubits).

    Parameters
    ----------
    model : IsingModel
        The Ising model to solve.
    num_reads : int
        Number of independent annealing runs.

    Returns
    -------
    dict
        ``best_spins``, ``best_energy``, ``energies``, ``samples``.
    """
    if _HAS_RUST_QA and model.n_qubits > 10:
        return self._solve_ising_rust(model, num_reads)
    return self._solve_ising_python(model, num_reads)

solve_qubo(model, num_reads=10)

Solve a QUBO model via simulated annealing.

Converts to Ising internally, solves, then maps back to binary.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def solve_qubo(
    self,
    model: QUBOModel,
    num_reads: int = 10,
) -> Dict[str, Any]:
    """Solve a QUBO model via simulated annealing.

    Converts to Ising internally, solves, then maps back to binary.
    """
    ising = model.to_ising()
    result = self.solve_ising(ising, num_reads=num_reads)

    # Convert spins → bits
    best_bits = {i: (s + 1) // 2 for i, s in result["best_spins"].items()}
    samples_bits = [
        {i: (s + 1) // 2 for i, s in sample.items()} for sample in result["samples"]
    ]

    return {
        "best_bits": best_bits,
        "best_energy": model.energy(best_bits),
        "energies": [model.energy(s) for s in samples_bits],
        "samples": samples_bits,
        "n_sweeps": self._n_sweeps,
        "num_reads": num_reads,
    }

DWaveInterface

Interface to D-Wave quantum annealer via Ocean SDK.

Wraps DWaveSampler + EmbeddingComposite for transparent minor-embedding. Falls back to simulated annealing if no QPU is available.

Parameters

chain_strength : float
    Chain strength for embedding (default 2.0).
num_reads : int
    Number of QPU reads (default 1000).
annealing_time_us : float
    Annealing time in microseconds (default 20.0).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class DWaveInterface:
    """Interface to D-Wave quantum annealer via Ocean SDK.

    Wraps ``DWaveSampler`` + ``EmbeddingComposite`` for transparent
    minor-embedding. Falls back to simulated annealing if no QPU
    is available.

    Parameters
    ----------
    chain_strength : float
        Chain strength for embedding (default 2.0).
    num_reads : int
        Number of QPU reads (default 1000).
    annealing_time_us : float
        Annealing time in microseconds (default 20.0).
    """

    def __init__(
        self,
        chain_strength: float = _DEFAULT_CHAIN_STRENGTH,
        num_reads: int = _DEFAULT_NUM_READS,
        annealing_time_us: float = _DEFAULT_ANNEALING_TIME_US,
    ) -> None:
        self._chain_strength = chain_strength
        self._num_reads = num_reads
        self._annealing_time_us = annealing_time_us

    @property
    def available(self) -> bool:
        """Whether D-Wave SDK is available."""
        return _HAS_DWAVE and _HAS_DIMOD

    def solve_ising(self, model: IsingModel) -> Dict[str, Any]:
        """Submit Ising model to D-Wave QPU.

        Falls back to SimulatedAnnealer if D-Wave unavailable.
        """
        if not self.available:
            sa = SimulatedAnnealer()
            result = sa.solve_ising(model, num_reads=min(self._num_reads, 20))
            result["backend"] = "simulated_annealing_fallback"
            return result

        bqm = dimod.BinaryQuadraticModel(model.h, model.J, model.offset, "SPIN")
        sampler = EmbeddingComposite(DWaveSampler())
        response = sampler.sample(
            bqm,
            num_reads=self._num_reads,
            chain_strength=self._chain_strength,
            annealing_time=self._annealing_time_us,
        )

        best = response.first
        return {
            "best_spins": dict(best.sample),
            "best_energy": best.energy,
            "num_reads": self._num_reads,
            "backend": "dwave_qpu",
            "timing": getattr(response, "info", {}).get("timing", {}),
        }

available property

Whether D-Wave SDK is available.

solve_ising(model)

Submit Ising model to D-Wave QPU.

Falls back to SimulatedAnnealer if D-Wave unavailable.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def solve_ising(self, model: IsingModel) -> Dict[str, Any]:
    """Submit Ising model to D-Wave QPU.

    Falls back to SimulatedAnnealer if D-Wave unavailable.
    """
    if not self.available:
        sa = SimulatedAnnealer()
        result = sa.solve_ising(model, num_reads=min(self._num_reads, 20))
        result["backend"] = "simulated_annealing_fallback"
        return result

    bqm = dimod.BinaryQuadraticModel(model.h, model.J, model.offset, "SPIN")
    sampler = EmbeddingComposite(DWaveSampler())
    response = sampler.sample(
        bqm,
        num_reads=self._num_reads,
        chain_strength=self._chain_strength,
        annealing_time=self._annealing_time_us,
    )

    best = response.first
    return {
        "best_spins": dict(best.sample),
        "best_energy": best.energy,
        "num_reads": self._num_reads,
        "backend": "dwave_qpu",
        "timing": getattr(response, "info", {}).get("timing", {}),
    }
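The fallback contract (always return a result dict, tagged with a backend key) can be sketched independently of the Ocean SDK. All names below are stand-ins for the example, not the bridge's API:

```python
# Sketch of the graceful-fallback dispatch: the caller always receives a
# result dict and can branch on result["backend"] after the fact.
def solve_with_fallback(model, qpu_available, qpu_solve, sa_solve):
    if not qpu_available:
        result = sa_solve(model)
        result["backend"] = "simulated_annealing_fallback"
        return result
    result = qpu_solve(model)
    result["backend"] = "dwave_qpu"
    return result

# No QPU: the classical solver runs and the result is tagged accordingly.
out = solve_with_fallback({"h": {}}, False, None, lambda m: {"best_energy": 0.0})
```

This is why downstream code inspects `result["backend"]` rather than catching exceptions: the soft-import design keeps both paths returning the same shape.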

EnergyLandscape

Analyze the energy landscape of an Ising model.

Computes energy statistics, degeneracy, spectral gap, and partition function (for small models).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class EnergyLandscape:
    """Analyze the energy landscape of an Ising model.

    Computes energy statistics, degeneracy, spectral gap, and
    partition function (for small models).
    """

    def analyze(
        self,
        model: IsingModel,
        samples: list[Dict[int, int]] | None = None,
    ) -> Dict[str, Any]:
        """Run landscape analysis.

        Parameters
        ----------
        model : IsingModel
            The model to analyze.
        samples : list[dict] | None
            Optional pre-computed samples. If None, enumerates
            (for n ≤ 20) or samples randomly.

        Returns
        -------
        dict
            ``min_energy``, ``max_energy``, ``mean_energy``,
            ``spectral_gap``, ``degeneracy``, ``n_unique_energies``.
        """
        if samples is None:
            if model.n_qubits <= 20:
                samples = self._enumerate_all(model.n_qubits)
            else:
                rng = np.random.default_rng(42)
                samples = [
                    {i: int(rng.choice([-1, 1])) for i in range(model.n_qubits)}
                    for _ in range(10000)
                ]

        if _HAS_RUST_QA and len(samples) > 100:
            h_indices = list(model.h.keys())
            h_values = [model.h[i] for i in h_indices]
            j_i = [k[0] for k in model.J]
            j_j = [k[1] for k in model.J]
            j_values = list(model.J.values())
            spin_matrix = [[s.get(i, 1) for i in range(model.n_qubits)] for s in samples]
            energies = _rust_batch_energy(
                [int(x) for x in h_indices],
                [float(x) for x in h_values],
                [int(x) for x in j_i],
                [int(x) for x in j_j],
                [float(x) for x in j_values],
                spin_matrix,
                float(model.offset),
            )
        else:
            energies = [model.energy(s) for s in samples]
        energies_sorted = sorted(set(energies))

        min_e = energies_sorted[0]
        degeneracy = energies.count(min_e)
        spectral_gap = energies_sorted[1] - energies_sorted[0] if len(energies_sorted) > 1 else 0.0

        return {
            "min_energy": min_e,
            "max_energy": max(energies),
            "mean_energy": float(np.mean(energies)),
            "std_energy": float(np.std(energies)),
            "spectral_gap": spectral_gap,
            "degeneracy": degeneracy,
            "n_unique_energies": len(energies_sorted),
            "n_samples": len(samples),
        }

    @staticmethod
    def _enumerate_all(n: int) -> list[Dict[int, int]]:
        """Enumerate all 2^n spin configurations."""
        configs: list[Dict[int, int]] = []
        for bits in range(2**n):
            config = {}
            for i in range(n):
                config[i] = 1 if (bits >> i) & 1 else -1
            configs.append(config)
        return configs
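For models small enough to enumerate, the landscape statistics reduce to a brute-force sweep over all 2^n spin states. A stand-alone sketch on an illustrative 2-qubit model:

```python
from itertools import product

# Tiny Ising model: one biased qubit, one ferromagnetic coupler.
h, J, n = {0: 0.5}, {(0, 1): -1.0}, 2

# Enumerate all 2^n = 4 spin configurations and collect energies.
energies = []
for spins in product([-1, 1], repeat=n):
    e = sum(h.get(i, 0.0) * s for i, s in enumerate(spins))
    e += sum(jij * spins[i] * spins[j] for (i, j), jij in J.items())
    energies.append(e)

levels = sorted(set(energies))
gap = levels[1] - levels[0]            # spectral gap: first excited - ground
degeneracy = energies.count(levels[0]) # states sharing the ground energy
```

The bias on qubit 0 breaks the up/down symmetry, so the ground state (both spins down, energy -1.5) is unique and the gap is 1.0.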

analyze(model, samples=None)

Run landscape analysis.

Parameters

model : IsingModel
    The model to analyze.
samples : list[dict] | None
    Optional pre-computed samples. If None, enumerates (for n ≤ 20) or samples randomly.

Returns

dict
    min_energy, max_energy, mean_energy, spectral_gap, degeneracy, n_unique_energies.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def analyze(
    self,
    model: IsingModel,
    samples: list[Dict[int, int]] | None = None,
) -> Dict[str, Any]:
    """Run landscape analysis.

    Parameters
    ----------
    model : IsingModel
        The model to analyze.
    samples : list[dict] | None
        Optional pre-computed samples. If None, enumerates
        (for n ≤ 20) or samples randomly.

    Returns
    -------
    dict
        ``min_energy``, ``max_energy``, ``mean_energy``,
        ``spectral_gap``, ``degeneracy``, ``n_unique_energies``.
    """
    if samples is None:
        if model.n_qubits <= 20:
            samples = self._enumerate_all(model.n_qubits)
        else:
            rng = np.random.default_rng(42)
            samples = [
                {i: int(rng.choice([-1, 1])) for i in range(model.n_qubits)}
                for _ in range(10000)
            ]

    if _HAS_RUST_QA and len(samples) > 100:
        h_indices = list(model.h.keys())
        h_values = [model.h[i] for i in h_indices]
        j_i = [k[0] for k in model.J]
        j_j = [k[1] for k in model.J]
        j_values = list(model.J.values())
        spin_matrix = [[s.get(i, 1) for i in range(model.n_qubits)] for s in samples]
        energies = _rust_batch_energy(
            [int(x) for x in h_indices],
            [float(x) for x in h_values],
            [int(x) for x in j_i],
            [int(x) for x in j_j],
            [float(x) for x in j_values],
            spin_matrix,
            float(model.offset),
        )
    else:
        energies = [model.energy(s) for s in samples]
    energies_sorted = sorted(set(energies))

    min_e = energies_sorted[0]
    degeneracy = energies.count(min_e)
    spectral_gap = energies_sorted[1] - energies_sorted[0] if len(energies_sorted) > 1 else 0.0

    return {
        "min_energy": min_e,
        "max_energy": max(energies),
        "mean_energy": float(np.mean(energies)),
        "std_energy": float(np.std(energies)),
        "spectral_gap": spectral_gap,
        "degeneracy": degeneracy,
        "n_unique_energies": len(energies_sorted),
        "n_samples": len(samples),
    }

EmbeddingAnalyzer

Analyze embedding requirements for D-Wave hardware.

Computes logical-to-physical qubit ratios, chain length statistics, and connectivity requirements.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class EmbeddingAnalyzer:
    """Analyze embedding requirements for D-Wave hardware.

    Computes logical-to-physical qubit ratios, chain length
    statistics, and connectivity requirements.
    """

    def analyze(self, model: IsingModel) -> Dict[str, Any]:
        """Analyze embedding requirements.

        Returns
        -------
        dict
            ``n_logical_qubits``, ``n_couplers``, ``density``,
            ``max_degree``, ``min_chain_estimate``.
        """
        n = model.n_qubits
        n_couplers = len(model.J)
        max_possible = n * (n - 1) // 2
        density = n_couplers / max(max_possible, 1)

        # Degree per qubit
        degree: Dict[int, int] = {i: 0 for i in range(n)}
        for i, j in model.J:
            degree[i] = degree.get(i, 0) + 1
            degree[j] = degree.get(j, 0) + 1

        max_degree = max(degree.values()) if degree else 0

        # Chimera/Pegasus has ~6/15 connections per physical qubit
        # Chain length estimate: ceil(degree / hardware_connectivity)
        pegasus_connectivity = 15
        min_chain = max(1, math.ceil(max_degree / pegasus_connectivity))

        return {
            "n_logical_qubits": n,
            "n_couplers": n_couplers,
            "density": density,
            "max_degree": max_degree,
            "mean_degree": float(np.mean(list(degree.values()))) if degree else 0.0,
            "min_chain_estimate": min_chain,
            "estimated_physical_qubits": n * min_chain,
            "pegasus_compatible": n * min_chain <= 5000,
        }

analyze(model)

Analyze embedding requirements.

Returns

dict
    n_logical_qubits, n_couplers, density, max_degree, min_chain_estimate.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
def analyze(self, model: IsingModel) -> Dict[str, Any]:
    """Analyze embedding requirements.

    Returns
    -------
    dict
        ``n_logical_qubits``, ``n_couplers``, ``density``,
        ``max_degree``, ``min_chain_estimate``.
    """
    n = model.n_qubits
    n_couplers = len(model.J)
    max_possible = n * (n - 1) // 2
    density = n_couplers / max(max_possible, 1)

    # Degree per qubit
    degree: Dict[int, int] = {i: 0 for i in range(n)}
    for i, j in model.J:
        degree[i] = degree.get(i, 0) + 1
        degree[j] = degree.get(j, 0) + 1

    max_degree = max(degree.values()) if degree else 0

    # Chimera/Pegasus has ~6/15 connections per physical qubit
    # Chain length estimate: ceil(degree / hardware_connectivity)
    pegasus_connectivity = 15
    min_chain = max(1, math.ceil(max_degree / pegasus_connectivity))

    return {
        "n_logical_qubits": n,
        "n_couplers": n_couplers,
        "density": density,
        "max_degree": max_degree,
        "mean_degree": float(np.mean(list(degree.values()))) if degree else 0.0,
        "min_chain_estimate": min_chain,
        "estimated_physical_qubits": n * min_chain,
        "pegasus_compatible": n * min_chain <= 5000,
    }
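The degree and density bookkeeping can be reproduced stand-alone; the 4-qubit ring below is an illustrative coupler graph, not a real embedding:

```python
# Count how many couplers touch each logical qubit in a 4-qubit ring,
# then derive the graph density used in the analysis above.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
n = 4

degree = {i: 0 for i in range(n)}
for i, j in J:               # each coupler raises both endpoints' degree
    degree[i] += 1
    degree[j] += 1

max_degree = max(degree.values())        # 2: every ring node has two neighbors
density = len(J) / (n * (n - 1) // 2)    # 4 couplers out of 6 possible pairs
```

A ring's maximum degree of 2 sits well under Pegasus's per-qubit connectivity of 15, so the chain-length estimate from the formula above stays at 1.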

HardwareGraph

D-Wave hardware graph topology model.

Generates adjacency structure for Chimera, Pegasus, and Zephyr topologies to enable embedding feasibility analysis.

Parameters

topology : str
    One of chimera, pegasus, zephyr.
size : int
    Topology size parameter (M for Chimera M×M×4, M for Pegasus P(M), M for Zephyr Z(M)).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class HardwareGraph:
    """D-Wave hardware graph topology model.

    Generates adjacency structure for Chimera, Pegasus, and Zephyr
    topologies to enable embedding feasibility analysis.

    Parameters
    ----------
    topology : str
        One of ``chimera``, ``pegasus``, ``zephyr``.
    size : int
        Topology size parameter (M for Chimera M×M×4,
        M for Pegasus P(M), M for Zephyr Z(M)).
    """

    _TOPOLOGIES = {
        "chimera": {"connectivity": 6, "base_qubits_per_cell": 8},
        "pegasus": {"connectivity": 15, "base_qubits_per_cell": 24},
        "zephyr": {"connectivity": 20, "base_qubits_per_cell": 48},
    }

    def __init__(self, topology: str = "pegasus", size: int = 16) -> None:
        if topology not in self._TOPOLOGIES:
            raise ValueError(f"Unknown topology: {topology}")
        self._topology = topology
        self._size = size
        self._props = self._TOPOLOGIES[topology]

    @property
    def n_physical_qubits(self) -> int:
        """Total physical qubits in this hardware graph."""
        if self._topology == "chimera":
            return self._size * self._size * 8
        elif self._topology == "pegasus":
            return 24 * self._size * (self._size - 1)
        else:  # zephyr
            return 48 * self._size * self._size

    @property
    def connectivity(self) -> int:
        """Per-qubit connectivity."""
        return self._props["connectivity"]

    def can_embed(self, model: IsingModel) -> Dict[str, Any]:
        """Check whether a model can be embedded on this hardware.

        Returns
        -------
        dict
            ``embeddable``, ``n_logical``, ``n_physical_available``,
            ``estimated_physical_needed``, ``utilization_pct``.
        """
        n = model.n_qubits
        n_couplers = len(model.J)

        # Degree estimate
        degree: Dict[int, int] = {}
        for i, j in model.J:
            degree[i] = degree.get(i, 0) + 1
            degree[j] = degree.get(j, 0) + 1

        max_deg = max(degree.values()) if degree else 0
        chain_est = max(1, math.ceil(max_deg / self.connectivity))
        physical_needed = n * chain_est

        return {
            "embeddable": physical_needed <= self.n_physical_qubits,
            "topology": self._topology,
            "size": self._size,
            "n_logical": n,
            "n_couplers": n_couplers,
            "max_degree": max_deg,
            "chain_length_estimate": chain_est,
            "n_physical_available": self.n_physical_qubits,
            "estimated_physical_needed": physical_needed,
            "utilization_pct": physical_needed / max(self.n_physical_qubits, 1) * 100,
        }
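The qubit-count formulas are easy to sanity-check in isolation. This sketch reproduces the model's arithmetic only; actual D-Wave processors yield fewer working qubits than the nominal count, and the Zephyr formula here is the one this class models.

```python
def n_physical_qubits(topology, size):
    # Nominal counts as modeled by HardwareGraph.
    if topology == "chimera":
        return size * size * 8          # M x M grid of 8-qubit cells
    if topology == "pegasus":
        return 24 * size * (size - 1)   # P(M)
    if topology == "zephyr":
        return 48 * size * size         # Z(M) as modeled here
    raise ValueError(f"Unknown topology: {topology}")

print(n_physical_qubits("pegasus", 16))  # 5760
```

P(16) gives 5760, the nominal qubit count of the Advantage-generation Pegasus graph, which is why the default size is 16.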

n_physical_qubits property

Total physical qubits in this hardware graph.

connectivity property

Per-qubit connectivity.

can_embed(model)

Check whether a model can be embedded on this hardware.

Returns

dict
    embeddable, n_logical, n_physical_available, estimated_physical_needed, utilization_pct.


ChainBreakResolver

Post-process D-Wave samples to repair broken chains.

When a logical qubit is embedded as a chain of physical qubits, some physical qubits in the chain may disagree. This class resolves disagreements using majority vote or energy minimization.

Parameters

method : str
    Resolution method: majority_vote or minimize_energy.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class ChainBreakResolver:
    """Post-process D-Wave samples to repair broken chains.

    When a logical qubit is embedded as a chain of physical qubits,
    some physical qubits in the chain may disagree. This class
    resolves disagreements using majority vote or energy minimization.

    Parameters
    ----------
    method : str
        Resolution method: ``majority_vote`` or ``minimize_energy``.
    """

    def __init__(self, method: str = "majority_vote") -> None:
        if method not in ("majority_vote", "minimize_energy"):
            raise ValueError(f"Unknown method: {method}")
        self._method = method

    def resolve(
        self,
        physical_samples: list[Dict[int, int]],
        chains: Dict[int, list[int]],
        model: IsingModel | None = None,
    ) -> list[Dict[int, int]]:
        """Resolve chain breaks in physical samples.

        Parameters
        ----------
        physical_samples : list[dict]
            Raw physical qubit samples.
        chains : dict[int, list[int]]
            Logical qubit → list of physical qubit indices.
        model : IsingModel | None
            Required for ``minimize_energy`` method.

        Returns
        -------
        list[dict]
            Resolved logical-qubit samples.
        """
        resolved: list[Dict[int, int]] = []

        for sample in physical_samples:
            logical: Dict[int, int] = {}
            for logical_q, physical_qs in chains.items():
                votes = [sample.get(pq, 1) for pq in physical_qs]

                if self._method == "majority_vote":
                    total = sum(votes)
                    logical[logical_q] = 1 if total >= 0 else -1
                else:
                    # Seed with majority vote; the local search below refines it
                    logical[logical_q] = 1 if sum(votes) >= 0 else -1

            if self._method == "minimize_energy" and model is not None:
                # Local search refinement
                energy = model.energy(logical)
                for q in logical:
                    flipped = dict(logical)
                    flipped[q] *= -1
                    e_flip = model.energy(flipped)
                    if e_flip < energy:
                        logical[q] *= -1
                        energy = e_flip

            resolved.append(logical)

        return resolved

    def analyze_breaks(
        self,
        physical_samples: list[Dict[int, int]],
        chains: Dict[int, list[int]],
    ) -> Dict[str, Any]:
        """Analyze chain break statistics.

        Returns
        -------
        dict
            ``total_breaks``, ``break_rate``, ``per_chain``.
        """
        total_breaks = 0
        total_chains = 0
        per_chain: Dict[int, float] = {}

        for logical_q, physical_qs in chains.items():
            if len(physical_qs) <= 1:
                per_chain[logical_q] = 0.0
                continue

            breaks = 0
            for sample in physical_samples:
                votes = [sample.get(pq, 1) for pq in physical_qs]
                if len(set(votes)) > 1:
                    breaks += 1

            rate = breaks / max(len(physical_samples), 1)
            per_chain[logical_q] = rate
            total_breaks += breaks
            total_chains += 1

        n_total = total_chains * max(len(physical_samples), 1)
        return {
            "total_breaks": total_breaks,
            "break_rate": total_breaks / max(n_total, 1),
            "per_chain": per_chain,
            "n_chains": len(chains),
        }

resolve(physical_samples, chains, model=None)

Resolve chain breaks in physical samples.

Parameters

physical_samples : list[dict]
    Raw physical qubit samples.
chains : dict[int, list[int]]
    Logical qubit → list of physical qubit indices.
model : IsingModel | None
    Required for the minimize_energy method.

Returns

list[dict] Resolved logical-qubit samples.


analyze_breaks(physical_samples, chains)

Analyze chain break statistics.

Returns

dict
    total_breaks, break_rate, per_chain.

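Majority-vote chain resolution is simple enough to sketch standalone. The helper below is hypothetical (not the library API), but follows the source's conventions: missing physical qubits default to +1, and ties break toward +1.

```python
def majority_vote(sample, chains):
    """Collapse physical-qubit spins to logical spins by majority.

    sample: {physical_qubit: spin (+1/-1)}
    chains: {logical_qubit: [physical qubit indices]}
    """
    out = {}
    for lq, pqs in chains.items():
        total = sum(sample.get(pq, 1) for pq in pqs)  # absent qubits count as +1
        out[lq] = 1 if total >= 0 else -1             # tie -> +1, as in the source
    return out

# Chain 0 is broken (spins disagree): two +1 votes beat one -1.
sample = {0: 1, 1: 1, 2: -1, 3: -1, 4: -1}
chains = {0: [0, 1, 2], 1: [3, 4]}
print(majority_vote(sample, chains))  # {0: 1, 1: -1}
```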

AnnealingSchedule

Custom annealing schedule builder for D-Wave.

Supports linear, pause-and-quench, and reverse annealing protocols.

The schedule is a list of (time_us, s) points where s ∈ [0, 1] is the anneal fraction (0 = transverse field dominant, 1 = problem Hamiltonian dominant).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class AnnealingSchedule:
    """Custom annealing schedule builder for D-Wave.

    Supports linear, pause-and-quench, and reverse annealing
    protocols.

    The schedule is a list of (time_us, s) points where s ∈ [0, 1]
    is the anneal fraction (0 = transverse field dominant,
    1 = problem Hamiltonian dominant).
    """

    def __init__(self) -> None:
        self._points: list[tuple[float, float]] = []

    def linear(self, duration_us: float = 20.0) -> "AnnealingSchedule":
        """Standard linear anneal from s=0 to s=1."""
        self._points = [(0.0, 0.0), (duration_us, 1.0)]
        return self

    def pause_and_quench(
        self,
        ramp_time_us: float = 5.0,
        pause_at_s: float = 0.4,
        pause_duration_us: float = 50.0,
        quench_time_us: float = 1.0,
    ) -> "AnnealingSchedule":
        """Pause-and-quench: ramp to s, hold, then quench to s=1."""
        t = 0.0
        self._points = [(t, 0.0)]
        t += ramp_time_us
        self._points.append((t, pause_at_s))
        t += pause_duration_us
        self._points.append((t, pause_at_s))
        t += quench_time_us
        self._points.append((t, 1.0))
        return self

    def reverse(
        self,
        initial_s: float = 1.0,
        reverse_to_s: float = 0.3,
        ramp_time_us: float = 5.0,
        hold_time_us: float = 10.0,
        forward_time_us: float = 5.0,
    ) -> "AnnealingSchedule":
        """Reverse annealing: start at s=1, go back, then forward."""
        t = 0.0
        self._points = [(t, initial_s)]
        t += ramp_time_us
        self._points.append((t, reverse_to_s))
        t += hold_time_us
        self._points.append((t, reverse_to_s))
        t += forward_time_us
        self._points.append((t, 1.0))
        return self

    @property
    def points(self) -> list[tuple[float, float]]:
        """Schedule points as [(time_us, s), ...]."""
        return list(self._points)

    @property
    def total_time_us(self) -> float:
        """Total annealing time in microseconds."""
        return self._points[-1][0] if self._points else 0.0

    def to_dict(self) -> Dict[str, Any]:
        """Export schedule as dict for D-Wave API."""
        return {
            "schedule": self._points,
            "total_time_us": self.total_time_us,
            "n_points": len(self._points),
        }
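The schedule builders just accumulate (time_us, s) breakpoints. A standalone sketch of the pause-and-quench shape, using the same defaults as the source:

```python
def pause_and_quench(ramp_us=5.0, pause_at_s=0.4, pause_us=50.0, quench_us=1.0):
    # Ramp to the pause fraction, hold there, then quench straight to s = 1.
    pts, t = [(0.0, 0.0)], 0.0
    t += ramp_us
    pts.append((t, pause_at_s))
    t += pause_us
    pts.append((t, pause_at_s))
    t += quench_us
    pts.append((t, 1.0))
    return pts

print(pause_and_quench())  # [(0.0, 0.0), (5.0, 0.4), (55.0, 0.4), (56.0, 1.0)]
```

The two middle points share the same s value, which is what encodes the pause; the steep final segment (1 µs for the remaining 0.6 of the anneal) is the quench.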

points property

Schedule points as [(time_us, s), ...].

total_time_us property

Total annealing time in microseconds.

linear(duration_us=20.0)

Standard linear anneal from s=0 to s=1.


pause_and_quench(ramp_time_us=5.0, pause_at_s=0.4, pause_duration_us=50.0, quench_time_us=1.0)

Pause-and-quench: ramp to s, hold, then quench to s=1.


reverse(initial_s=1.0, reverse_to_s=0.3, ramp_time_us=5.0, hold_time_us=10.0, forward_time_us=5.0)

Reverse annealing: start at s=1, go back, then forward.


to_dict()

Export schedule as dict for D-Wave API.


GaugeTransform

Random gauge transformations for improved sampling.

Applies random spin-flip transformations (g_i ∈ {+1, -1}) to the Ising model: h'_i = g_i · h_i, J'_ij = g_i · g_j · J_ij. This breaks systematic QPU biases without changing the energy landscape.

Parameters

n_gauges : int
    Number of gauge transforms to apply (default 10).
seed : int
    Random seed.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class GaugeTransform:
    """Random gauge transformations for improved sampling.

    Applies random spin-flip transformations (g_i ∈ {+1, -1}) to the
    Ising model: h'_i = g_i · h_i, J'_ij = g_i · g_j · J_ij.
    This breaks systematic QPU biases without changing the energy
    landscape.

    Parameters
    ----------
    n_gauges : int
        Number of gauge transforms to apply (default 10).
    seed : int
        Random seed.
    """

    def __init__(self, n_gauges: int = 10, seed: int = 42) -> None:
        self._n_gauges = n_gauges
        self._rng = np.random.default_rng(seed)

    def transform(self, model: IsingModel) -> list[IsingModel]:
        """Generate gauge-transformed copies of the model.

        Returns
        -------
        list[IsingModel]
            List of gauge-transformed models.
        """
        transforms: list[IsingModel] = []

        for g_idx in range(self._n_gauges):
            # Random gauge vector
            gauge = {i: int(self._rng.choice([-1, 1])) for i in range(model.n_qubits)}

            h_new = {i: gauge[i] * hi for i, hi in model.h.items()}
            j_new = {
                (i, j): gauge.get(i, 1) * gauge.get(j, 1) * jij for (i, j), jij in model.J.items()
            }

            transforms.append(
                IsingModel(
                    h=h_new,
                    J=j_new,
                    offset=model.offset,
                    qubit_labels=dict(model.qubit_labels),
                    n_qubits=model.n_qubits,
                    source=f"{model.source}_gauge{g_idx}",
                )
            )

        return transforms

    def untransform_sample(
        self,
        sample: Dict[int, int],
        gauge: Dict[int, int],
    ) -> Dict[int, int]:
        """Undo gauge transform on a sample.

        Parameters
        ----------
        sample : dict
            Transformed spin assignment.
        gauge : dict
            Gauge vector used for the transform.

        Returns
        -------
        dict
            Original-frame spin assignment.
        """
        return {i: s * gauge.get(i, 1) for i, s in sample.items()}
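The defining property of a gauge transform — it relabels spins without changing the energy landscape — follows from g_i² = 1: h'_i s'_i = g_i h_i · g_i s_i = h_i s_i, and likewise for each coupler. This standalone sketch (not the library API) verifies it by brute force on a 3-spin model:

```python
import random

def ising_energy(h, J, s):
    # E(s) = sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j
    return (sum(hi * s[i] for i, hi in h.items())
            + sum(jij * s[i] * s[j] for (i, j), jij in J.items()))

random.seed(0)
h = {0: 0.5, 1: -1.0, 2: 0.25}
J = {(0, 1): -1.0, (1, 2): 0.75}
g = {i: random.choice([-1, 1]) for i in h}  # random gauge vector

# Gauge-transformed model, as in GaugeTransform.transform
h2 = {i: g[i] * hi for i, hi in h.items()}
J2 = {(i, j): g[i] * g[j] * jij for (i, j), jij in J.items()}

for bits in range(8):  # all 2^3 spin assignments
    s = {i: 1 if (bits >> i) & 1 else -1 for i in range(3)}
    s2 = {i: g[i] * si for i, si in s.items()}  # same state in the gauged frame
    assert abs(ising_energy(h, J, s) - ising_energy(h2, J2, s2)) < 1e-12
print("gauge invariance holds on all 8 states")
```

Note that `untransform_sample` is exactly the `s2 → s` map above: multiplying each spin by g_i again, since g_i is its own inverse.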

transform(model)

Generate gauge-transformed copies of the model.

Returns

list[IsingModel] List of gauge-transformed models.


untransform_sample(sample, gauge)

Undo gauge transform on a sample.

Parameters

sample : dict
    Transformed spin assignment.
gauge : dict
    Gauge vector used for the transform.

Returns

dict Original-frame spin assignment.


SCBitstreamQUBO

SC-specific QUBO formulations for bitstream optimization.

Provides problem-specific encodings for common SC optimization tasks:

- Weight optimization: find a binary weight mask that minimizes network error.
- Pruning: select a minimal subset of connections that preserves accuracy.
- Topology search: binary selection of connections from a candidate set.

Parameters

penalty : float
    Constraint violation penalty (default 5.0).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
class SCBitstreamQUBO:
    """SC-specific QUBO formulations for bitstream optimization.

    Provides problem-specific encodings for common SC optimization
    tasks:
    - **Weight optimization**: Find binary weight mask that minimizes
      network error.
    - **Pruning**: Select minimal subset of connections preserving
      accuracy.
    - **Topology search**: Binary selection of connections from a
      candidate set.

    Parameters
    ----------
    penalty : float
        Constraint violation penalty (default 5.0).
    """

    def __init__(self, penalty: float = 5.0) -> None:
        self._penalty = penalty

    def weight_optimization(
        self,
        target_output: np.ndarray[Any, Any],
        candidate_weights: np.ndarray[Any, Any],
        n_bits: int = 8,
    ) -> QUBOModel:
        """Formulate weight optimization as QUBO.

        Find binary vector x ∈ {0,1}^n that minimizes
        ||target - candidate_weights @ x||².

        Parameters
        ----------
        target_output : np.ndarray
            Desired output vector (m,).
        candidate_weights : np.ndarray
            Weight matrix (m × n).
        n_bits : int
            Number of binary decision variables.

        Returns
        -------
        QUBOModel
        """
        W = candidate_weights
        y = target_output

        # QUBO: x^T (W^T W) x - 2 y^T W x + y^T y
        # Q_ij = (W^T W)_ij for off-diagonal
        # Q_ii = (W^T W)_ii - 2 (y^T W)_i
        WtW = W.T @ W
        Wty = W.T @ y
        n = min(WtW.shape[0], n_bits)

        q_matrix: Dict[tuple[int, int], float] = {}
        for i in range(n):
            q_matrix[(i, i)] = float(WtW[i, i] - 2.0 * Wty[i])
            for j in range(i + 1, n):
                val = float(WtW[i, j] + WtW[j, i])
                if abs(val) > 1e-12:
                    q_matrix[(i, j)] = val

        return QUBOModel(
            Q=q_matrix,
            offset=float(y @ y),
            n_qubits=n,
            source="sc_weight_optimization",
        )

    def pruning(
        self,
        adjacency: np.ndarray[Any, Any],
        importance_scores: np.ndarray[Any, Any],
        max_connections: int,
    ) -> QUBOModel:
        """Formulate network pruning as QUBO.

        Parameters
        ----------
        adjacency : np.ndarray
            N×N weight matrix (connections to consider).
        importance_scores : np.ndarray
            N×N importance scores (higher = more important).
        max_connections : int
            Maximum number of connections to keep.

        Returns
        -------
        QUBOModel
        """
        n = adjacency.shape[0]
        # Create binary variable per edge
        edges: list[tuple[int, int]] = []
        for i in range(n):
            for j in range(i + 1, n):
                if abs(adjacency[i, j]) > 1e-12:
                    edges.append((i, j))

        ne = len(edges)
        q_matrix: Dict[tuple[int, int], float] = {}

        # Objective: maximize importance (minimize negative importance)
        for k, (i, j) in enumerate(edges):
            q_matrix[(k, k)] = -float(importance_scores[i, j])

        # Constraint: sum(x) = max_connections
        # Penalty: P * (sum(x) - K)^2
        for k1 in range(ne):
            q_matrix[(k1, k1)] = q_matrix.get((k1, k1), 0.0) + self._penalty * (
                1 - 2 * max_connections
            )
            for k2 in range(k1 + 1, ne):
                q_matrix[(k1, k2)] = q_matrix.get((k1, k2), 0.0) + 2 * self._penalty

        return QUBOModel(
            Q=q_matrix,
            offset=self._penalty * max_connections**2,
            n_qubits=ne,
            source="sc_pruning",
        )
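The QUBO expansion used by weight_optimization relies on x_i² = x_i for binary variables, which lets the linear term −2(Wᵀy)_i fold into the diagonal. It can be checked by brute force: for every x ∈ {0,1}ⁿ, the QUBO energy plus offset must equal ||y − Wx||². A standalone sketch with small random data (hypothetical helpers, not the library API):

```python
import itertools
import numpy as np

def build_qubo(W, y):
    # Q_ii = (W^T W)_ii - 2 (W^T y)_i ;  Q_ij = (W^T W)_ij + (W^T W)_ji for i < j
    WtW, Wty = W.T @ W, W.T @ y
    n = WtW.shape[0]
    Q = {}
    for i in range(n):
        Q[(i, i)] = float(WtW[i, i] - 2.0 * Wty[i])
        for j in range(i + 1, n):
            Q[(i, j)] = float(WtW[i, j] + WtW[j, i])
    return Q, float(y @ y)  # offset is the constant term y^T y

def qubo_energy(Q, offset, x):
    return offset + sum(v * x[i] * x[j] for (i, j), v in Q.items())

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
y = rng.normal(size=5)
Q, off = build_qubo(W, y)

# Exhaustive check over all 2^3 binary vectors.
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    assert abs(qubo_energy(Q, off, x) - float(np.sum((y - W @ x) ** 2))) < 1e-9
```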

weight_optimization(target_output, candidate_weights, n_bits=8)

Formulate weight optimization as QUBO.

Find binary vector x ∈ {0,1}^n that minimizes ||target - candidate_weights @ x||².

Parameters

target_output : np.ndarray
    Desired output vector (m,).
candidate_weights : np.ndarray
    Weight matrix (m × n).
n_bits : int
    Number of binary decision variables.

Returns

QUBOModel


pruning(adjacency, importance_scores, max_connections)

Formulate network pruning as QUBO.

Parameters

adjacency : np.ndarray
    N×N weight matrix (connections to consider).
importance_scores : np.ndarray
    N×N importance scores (higher = more important).
max_connections : int
    Maximum number of connections to keep.

Returns

QUBOModel

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
1351
1352
1353
1354
1355
1356
1357
1358
1359
1360
1361
1362
1363
1364
1365
1366
1367
1368
1369
1370
1371
1372
1373
1374
1375
1376
1377
1378
1379
1380
1381
1382
1383
1384
1385
1386
1387
1388
1389
1390
1391
1392
1393
1394
1395
1396
1397
1398
1399
1400
1401
def pruning(
    self,
    adjacency: np.ndarray[Any, Any],
    importance_scores: np.ndarray[Any, Any],
    max_connections: int,
) -> QUBOModel:
    """Formulate network pruning as QUBO.

    Parameters
    ----------
    adjacency : np.ndarray
        N×N weight matrix (connections to consider).
    importance_scores : np.ndarray
        N×N importance scores (higher = more important).
    max_connections : int
        Maximum number of connections to keep.

    Returns
    -------
    QUBOModel
    """
    n = adjacency.shape[0]
    # Create binary variable per edge
    edges: list[tuple[int, int]] = []
    for i in range(n):
        for j in range(i + 1, n):
            if abs(adjacency[i, j]) > 1e-12:
                edges.append((i, j))

    ne = len(edges)
    q_matrix: Dict[tuple[int, int], float] = {}

    # Objective: maximize importance (minimize negative importance)
    for k, (i, j) in enumerate(edges):
        q_matrix[(k, k)] = -float(importance_scores[i, j])

    # Constraint: sum(x) = max_connections
    # Penalty: P * (sum(x) - K)^2
    for k1 in range(ne):
        q_matrix[(k1, k1)] = q_matrix.get((k1, k1), 0.0) + self._penalty * (
            1 - 2 * max_connections
        )
        for k2 in range(k1 + 1, ne):
            q_matrix[(k1, k2)] = q_matrix.get((k1, k2), 0.0) + 2 * self._penalty

    return QUBOModel(
        Q=q_matrix,
        offset=self._penalty * max_connections**2,
        n_qubits=ne,
        source="sc_pruning",
    )

SampleAggregator

Post-process and aggregate quantum annealing samples.

Provides filtering, deduplication, energy histogram, and Boltzmann-weighted statistics.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1409–1487
class SampleAggregator:
    """Post-process and aggregate quantum annealing samples.

    Provides filtering, deduplication, energy histogram, and
    Boltzmann-weighted statistics.
    """

    def aggregate(
        self,
        samples: list[Dict[int, int]],
        energies: list[float],
        temperature: float = 1.0,
    ) -> Dict[str, Any]:
        """Aggregate and analyze sample set.

        Parameters
        ----------
        samples : list[dict]
            Spin/bit configurations.
        energies : list[float]
            Corresponding energies.
        temperature : float
            Temperature for Boltzmann weighting.

        Returns
        -------
        dict
            ``unique_samples``, ``best``, ``histogram``,
            ``boltzmann_avg_energy``, ``success_probability``.
        """
        if not samples:
            return {"unique_samples": 0, "best": {}, "histogram": {}}

        # Sort by energy
        paired = sorted(zip(energies, samples), key=lambda x: x[0])
        best_energy = paired[0][0]
        best_sample = paired[0][1]

        # Unique samples
        seen: set[str] = set()
        unique = 0
        for _, s in paired:
            key = str(sorted(s.items()))
            if key not in seen:
                seen.add(key)
                unique += 1

        # Histogram (bin energies)
        e_arr = np.array(energies)
        n_bins = min(20, len(set(energies)))
        counts, bin_edges = np.histogram(e_arr, bins=max(n_bins, 1))
        histogram = {
            "counts": counts.tolist(),
            "bin_edges": bin_edges.tolist(),
        }

        # Boltzmann-weighted average
        beta = 1.0 / max(temperature, 1e-12)
        min_e = min(energies)
        weights = np.array([math.exp(-beta * (e - min_e)) for e in energies])
        z = float(np.sum(weights))
        boltzmann_avg = float(np.sum(weights * e_arr)) / z if z > 0 else min_e

        # Success probability (fraction at ground state)
        gs_count = sum(1 for e in energies if abs(e - best_energy) < 1e-10)
        success_prob = gs_count / max(len(energies), 1)

        return {
            "unique_samples": unique,
            "total_samples": len(samples),
            "best_sample": best_sample,
            "best_energy": best_energy,
            "mean_energy": float(np.mean(e_arr)),
            "std_energy": float(np.std(e_arr)),
            "boltzmann_avg_energy": boltzmann_avg,
            "success_probability": success_prob,
            "gs_degeneracy": gs_count,
            "histogram": histogram,
        }

aggregate(samples, energies, temperature=1.0)

Aggregate and analyze sample set.

Parameters

samples : list[dict]
    Spin/bit configurations.
energies : list[float]
    Corresponding energies.
temperature : float
    Temperature for Boltzmann weighting.

Returns

dict
    unique_samples, best, histogram, boltzmann_avg_energy, success_probability.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1416–1487
def aggregate(
    self,
    samples: list[Dict[int, int]],
    energies: list[float],
    temperature: float = 1.0,
) -> Dict[str, Any]:
    """Aggregate and analyze sample set.

    Parameters
    ----------
    samples : list[dict]
        Spin/bit configurations.
    energies : list[float]
        Corresponding energies.
    temperature : float
        Temperature for Boltzmann weighting.

    Returns
    -------
    dict
        ``unique_samples``, ``best``, ``histogram``,
        ``boltzmann_avg_energy``, ``success_probability``.
    """
    if not samples:
        return {"unique_samples": 0, "best": {}, "histogram": {}}

    # Sort by energy
    paired = sorted(zip(energies, samples), key=lambda x: x[0])
    best_energy = paired[0][0]
    best_sample = paired[0][1]

    # Unique samples
    seen: set[str] = set()
    unique = 0
    for _, s in paired:
        key = str(sorted(s.items()))
        if key not in seen:
            seen.add(key)
            unique += 1

    # Histogram (bin energies)
    e_arr = np.array(energies)
    n_bins = min(20, len(set(energies)))
    counts, bin_edges = np.histogram(e_arr, bins=max(n_bins, 1))
    histogram = {
        "counts": counts.tolist(),
        "bin_edges": bin_edges.tolist(),
    }

    # Boltzmann-weighted average
    beta = 1.0 / max(temperature, 1e-12)
    min_e = min(energies)
    weights = np.array([math.exp(-beta * (e - min_e)) for e in energies])
    z = float(np.sum(weights))
    boltzmann_avg = float(np.sum(weights * e_arr)) / z if z > 0 else min_e

    # Success probability (fraction at ground state)
    gs_count = sum(1 for e in energies if abs(e - best_energy) < 1e-10)
    success_prob = gs_count / max(len(energies), 1)

    return {
        "unique_samples": unique,
        "total_samples": len(samples),
        "best_sample": best_sample,
        "best_energy": best_energy,
        "mean_energy": float(np.mean(e_arr)),
        "std_energy": float(np.std(e_arr)),
        "boltzmann_avg_energy": boltzmann_avg,
        "success_probability": success_prob,
        "gs_degeneracy": gs_count,
        "histogram": histogram,
    }

SCPrecisionEncoder

Encode SC probability values as qubit configurations.

SC values are continuous probabilities in [0, 1]. Quantum annealers operate on binary variables. This encoder provides three strategies for mapping SC precision to qubits:

  • binary: k qubits encode 2^k levels (compact but coupled)
  • unary: k qubits encode k+1 levels (robust but expensive)
  • one_hot: k qubits encode k levels (good for categorical)
Parameters

encoding : str
    One of binary, unary, one_hot.
n_bits : int
    Number of qubits per SC value (default 8).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1495–1602
class SCPrecisionEncoder:
    """Encode SC probability values as qubit configurations.

    SC values are continuous probabilities in [0, 1]. Quantum
    annealers operate on binary variables. This encoder provides
    three strategies for mapping SC precision to qubits:

    - **binary**: k qubits encode 2^k levels (compact but coupled)
    - **unary**: k qubits encode k+1 levels (robust but expensive)
    - **one_hot**: k qubits encode k levels (good for categorical)

    Parameters
    ----------
    encoding : str
        One of ``binary``, ``unary``, ``one_hot``.
    n_bits : int
        Number of qubits per SC value (default 8).
    """

    def __init__(self, encoding: str = "binary", n_bits: int = 8) -> None:
        if encoding not in ("binary", "unary", "one_hot"):
            raise ValueError(f"Unknown encoding: {encoding}")
        self._encoding = encoding
        self._n_bits = n_bits

    @property
    def n_levels(self) -> int:
        """Number of representable precision levels."""
        if self._encoding == "binary":
            return 2**self._n_bits
        elif self._encoding == "unary":
            return self._n_bits + 1
        else:  # one_hot
            return self._n_bits

    def encode(self, sc_value: float) -> Dict[int, int]:
        """Encode an SC probability as qubit configuration.

        Parameters
        ----------
        sc_value : float
            SC value in [0, 1].

        Returns
        -------
        dict[int, int]
            Qubit index → binary value.
        """
        v = max(0.0, min(1.0, sc_value))

        if self._encoding == "binary":
            level = int(round(v * (2**self._n_bits - 1)))
            return {i: (level >> i) & 1 for i in range(self._n_bits)}
        elif self._encoding == "unary":
            n_ones = int(round(v * self._n_bits))
            return {i: (1 if i < n_ones else 0) for i in range(self._n_bits)}
        else:  # one_hot
            level = int(round(v * (self._n_bits - 1)))
            return {i: (1 if i == level else 0) for i in range(self._n_bits)}

    def decode(self, qubits: Dict[int, int]) -> float:
        """Decode qubit configuration back to SC probability.

        Parameters
        ----------
        qubits : dict[int, int]
            Qubit index → binary value.

        Returns
        -------
        float
            Reconstructed SC value in [0, 1].
        """
        if self._encoding == "binary":
            level = sum(qubits.get(i, 0) << i for i in range(self._n_bits))
            return level / max(2**self._n_bits - 1, 1)
        elif self._encoding == "unary":
            n_ones = sum(qubits.get(i, 0) for i in range(self._n_bits))
            return n_ones / max(self._n_bits, 1)
        else:  # one_hot
            for i in range(self._n_bits):
                if qubits.get(i, 0) == 1:
                    return i / max(self._n_bits - 1, 1)
            return 0.0

    def qubits_needed(self, n_sc_values: int) -> int:
        """Total qubits needed to encode n SC values."""
        return n_sc_values * self._n_bits

    def encode_array(self, values: np.ndarray[Any, Any]) -> Dict[int, int]:
        """Encode array of SC values into a single qubit dict.

        Parameters
        ----------
        values : np.ndarray
            1D array of SC values.

        Returns
        -------
        dict[int, int]
            Global qubit index → binary value.
        """
        result: Dict[int, int] = {}
        for idx, v in enumerate(values):
            local = self.encode(float(v))
            for qi, val in local.items():
                result[idx * self._n_bits + qi] = val
        return result

n_levels property

Number of representable precision levels.

encode(sc_value)

Encode an SC probability as qubit configuration.

Parameters

sc_value : float
    SC value in [0, 1].

Returns

dict[int, int]
    Qubit index → binary value.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1530–1553
def encode(self, sc_value: float) -> Dict[int, int]:
    """Encode an SC probability as qubit configuration.

    Parameters
    ----------
    sc_value : float
        SC value in [0, 1].

    Returns
    -------
    dict[int, int]
        Qubit index → binary value.
    """
    v = max(0.0, min(1.0, sc_value))

    if self._encoding == "binary":
        level = int(round(v * (2**self._n_bits - 1)))
        return {i: (level >> i) & 1 for i in range(self._n_bits)}
    elif self._encoding == "unary":
        n_ones = int(round(v * self._n_bits))
        return {i: (1 if i < n_ones else 0) for i in range(self._n_bits)}
    else:  # one_hot
        level = int(round(v * (self._n_bits - 1)))
        return {i: (1 if i == level else 0) for i in range(self._n_bits)}

decode(qubits)

Decode qubit configuration back to SC probability.

Parameters

qubits : dict[int, int]
    Qubit index → binary value.

Returns

float
    Reconstructed SC value in [0, 1].

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1555–1578
def decode(self, qubits: Dict[int, int]) -> float:
    """Decode qubit configuration back to SC probability.

    Parameters
    ----------
    qubits : dict[int, int]
        Qubit index → binary value.

    Returns
    -------
    float
        Reconstructed SC value in [0, 1].
    """
    if self._encoding == "binary":
        level = sum(qubits.get(i, 0) << i for i in range(self._n_bits))
        return level / max(2**self._n_bits - 1, 1)
    elif self._encoding == "unary":
        n_ones = sum(qubits.get(i, 0) for i in range(self._n_bits))
        return n_ones / max(self._n_bits, 1)
    else:  # one_hot
        for i in range(self._n_bits):
            if qubits.get(i, 0) == 1:
                return i / max(self._n_bits - 1, 1)
        return 0.0

qubits_needed(n_sc_values)

Total qubits needed to encode n SC values.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1580–1582
def qubits_needed(self, n_sc_values: int) -> int:
    """Total qubits needed to encode n SC values."""
    return n_sc_values * self._n_bits

encode_array(values)

Encode array of SC values into a single qubit dict.

Parameters

values : np.ndarray
    1D array of SC values.

Returns

dict[int, int]
    Global qubit index → binary value.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1584–1602
def encode_array(self, values: np.ndarray[Any, Any]) -> Dict[int, int]:
    """Encode array of SC values into a single qubit dict.

    Parameters
    ----------
    values : np.ndarray
        1D array of SC values.

    Returns
    -------
    dict[int, int]
        Global qubit index → binary value.
    """
    result: Dict[int, int] = {}
    for idx, v in enumerate(values):
        local = self.encode(float(v))
        for qi, val in local.items():
            result[idx * self._n_bits + qi] = val
    return result

ProblemDecomposer

Decompose large QUBO/Ising into sub-problems for QPU.

When a model exceeds QPU capacity, this class partitions it into smaller sub-problems that fit on hardware, solves each, then merges the results.

Parameters

max_subproblem_size : int
    Maximum qubits per sub-problem (default 64 for Chimera unit cell).
overlap : int
    Number of shared qubits between partitions (default 4).
n_iterations : int
    Number of decomposition-merge iterations (default 10).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1610–1772
class ProblemDecomposer:
    """Decompose large QUBO/Ising into sub-problems for QPU.

    When a model exceeds QPU capacity, this class partitions it
    into smaller sub-problems that fit on hardware, solves each,
    then merges the results.

    Parameters
    ----------
    max_subproblem_size : int
        Maximum qubits per sub-problem (default 64 for Chimera unit cell).
    overlap : int
        Number of shared qubits between partitions (default 4).
    n_iterations : int
        Number of decomposition-merge iterations (default 10).
    """

    def __init__(
        self,
        max_subproblem_size: int = 64,
        overlap: int = 4,
        n_iterations: int = 10,
    ) -> None:
        self._max_size = max_subproblem_size
        self._overlap = overlap
        self._n_iterations = n_iterations

    def decompose(self, model: IsingModel) -> list[IsingModel]:
        """Partition Ising model into sub-problems.

        Uses a greedy graph partitioning that keeps strongly-coupled
        qubits together.

        Parameters
        ----------
        model : IsingModel
            The model to decompose.

        Returns
        -------
        list[IsingModel]
            Sub-problems, each ≤ max_subproblem_size qubits.
        """
        if model.n_qubits <= self._max_size:
            return [model]

        # Build adjacency
        neighbors: Dict[int, list[int]] = {i: [] for i in range(model.n_qubits)}
        for i, j in model.J:
            neighbors[i].append(j)
            neighbors[j].append(i)

        # Greedy partitioning
        assigned: set[int] = set()
        partitions: list[list[int]] = []

        remaining = set(range(model.n_qubits))
        while remaining:
            seed = min(remaining)
            partition = [seed]
            assigned.add(seed)
            remaining.discard(seed)

            while len(partition) < self._max_size and remaining:
                # Find unassigned neighbor of current partition
                best = None
                best_score: float = -1.0
                for q in partition:
                    for n in neighbors.get(q, []):
                        if n in remaining:
                            score = abs(model.J.get((min(q, n), max(q, n)), 0.0))
                            if score > best_score:
                                best = n
                                best_score = score

                if best is None:
                    # No connected neighbors, take any remaining
                    best = min(remaining)

                partition.append(best)
                assigned.add(best)
                remaining.discard(best)

            partitions.append(partition)

        # Build sub-models
        sub_models: list[IsingModel] = []
        for part_idx, part_qubits in enumerate(partitions):
            qs = set(part_qubits)
            local_map = {q: i for i, q in enumerate(part_qubits)}

            h_sub = {local_map[q]: model.h.get(q, 0.0) for q in part_qubits}
            j_sub: Dict[tuple[int, int], float] = {}
            for (i, j), jij in model.J.items():
                if i in qs and j in qs:
                    li, lj = local_map[i], local_map[j]
                    a, b = min(li, lj), max(li, lj)
                    j_sub[(a, b)] = jij

            labels = {local_map[q]: model.qubit_labels.get(q, f"q{q}") for q in part_qubits}

            sub_models.append(
                IsingModel(
                    h=h_sub,
                    J=j_sub,
                    offset=0.0,
                    qubit_labels=labels,
                    n_qubits=len(part_qubits),
                    source=f"{model.source}_part{part_idx}",
                )
            )

        return sub_models

    def solve_decomposed(
        self,
        model: IsingModel,
        solver: SimulatedAnnealer | None = None,
    ) -> Dict[str, Any]:
        """Decompose, solve sub-problems, and merge.

        Parameters
        ----------
        model : IsingModel
            The full model.
        solver : SimulatedAnnealer | None
            Solver for sub-problems (default: new SA).

        Returns
        -------
        dict
            ``best_spins``, ``best_energy``, ``n_partitions``.
        """
        if solver is None:
            solver = SimulatedAnnealer(n_sweeps=1000, seed=42)

        sub_models = self.decompose(model)

        # Reconstruct global mapping
        global_spins: Dict[int, int] = {}
        # Initialize with +1
        for i in range(model.n_qubits):
            global_spins[i] = 1

        for _iteration in range(self._n_iterations):
            for sub in sub_models:
                result = solver.solve_ising(sub, num_reads=5)
                # Map back
                best = result["best_spins"]
                for local_q, spin in best.items():
                    # Find global index from label
                    label = sub.qubit_labels.get(local_q, "")
                    for gq, gl in model.qubit_labels.items():
                        if gl == label:
                            global_spins[gq] = spin
                            break

        return {
            "best_spins": global_spins,
            "best_energy": model.energy(global_spins),
            "n_partitions": len(sub_models),
            "n_iterations": self._n_iterations,
        }

decompose(model)

Partition Ising model into sub-problems.

Uses a greedy graph partitioning that keeps strongly-coupled qubits together.

Parameters

model : IsingModel
    The model to decompose.

Returns

list[IsingModel]
    Sub-problems, each ≤ max_subproblem_size qubits.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1637–1722
def decompose(self, model: IsingModel) -> list[IsingModel]:
    """Partition Ising model into sub-problems.

    Uses a greedy graph partitioning that keeps strongly-coupled
    qubits together.

    Parameters
    ----------
    model : IsingModel
        The model to decompose.

    Returns
    -------
    list[IsingModel]
        Sub-problems, each ≤ max_subproblem_size qubits.
    """
    if model.n_qubits <= self._max_size:
        return [model]

    # Build adjacency
    neighbors: Dict[int, list[int]] = {i: [] for i in range(model.n_qubits)}
    for i, j in model.J:
        neighbors[i].append(j)
        neighbors[j].append(i)

    # Greedy partitioning
    assigned: set[int] = set()
    partitions: list[list[int]] = []

    remaining = set(range(model.n_qubits))
    while remaining:
        seed = min(remaining)
        partition = [seed]
        assigned.add(seed)
        remaining.discard(seed)

        while len(partition) < self._max_size and remaining:
            # Find unassigned neighbor of current partition
            best = None
            best_score: float = -1.0
            for q in partition:
                for n in neighbors.get(q, []):
                    if n in remaining:
                        score = abs(model.J.get((min(q, n), max(q, n)), 0.0))
                        if score > best_score:
                            best = n
                            best_score = score

            if best is None:
                # No connected neighbors, take any remaining
                best = min(remaining)

            partition.append(best)
            assigned.add(best)
            remaining.discard(best)

        partitions.append(partition)

    # Build sub-models
    sub_models: list[IsingModel] = []
    for part_idx, part_qubits in enumerate(partitions):
        qs = set(part_qubits)
        local_map = {q: i for i, q in enumerate(part_qubits)}

        h_sub = {local_map[q]: model.h.get(q, 0.0) for q in part_qubits}
        j_sub: Dict[tuple[int, int], float] = {}
        for (i, j), jij in model.J.items():
            if i in qs and j in qs:
                li, lj = local_map[i], local_map[j]
                a, b = min(li, lj), max(li, lj)
                j_sub[(a, b)] = jij

        labels = {local_map[q]: model.qubit_labels.get(q, f"q{q}") for q in part_qubits}

        sub_models.append(
            IsingModel(
                h=h_sub,
                J=j_sub,
                offset=0.0,
                qubit_labels=labels,
                n_qubits=len(part_qubits),
                source=f"{model.source}_part{part_idx}",
            )
        )

    return sub_models

solve_decomposed(model, solver=None)

Decompose, solve sub-problems, and merge.

Parameters

model : IsingModel
    The full model.
solver : SimulatedAnnealer | None
    Solver for sub-problems (default: new SA).

Returns

dict
    best_spins, best_energy, n_partitions.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1724–1772
def solve_decomposed(
    self,
    model: IsingModel,
    solver: SimulatedAnnealer | None = None,
) -> Dict[str, Any]:
    """Decompose, solve sub-problems, and merge.

    Parameters
    ----------
    model : IsingModel
        The full model.
    solver : SimulatedAnnealer | None
        Solver for sub-problems (default: new SA).

    Returns
    -------
    dict
        ``best_spins``, ``best_energy``, ``n_partitions``.
    """
    if solver is None:
        solver = SimulatedAnnealer(n_sweeps=1000, seed=42)

    sub_models = self.decompose(model)

    # Reconstruct global mapping
    global_spins: Dict[int, int] = {}
    # Initialize with +1
    for i in range(model.n_qubits):
        global_spins[i] = 1

    for _iteration in range(self._n_iterations):
        for sub in sub_models:
            result = solver.solve_ising(sub, num_reads=5)
            # Map back
            best = result["best_spins"]
            for local_q, spin in best.items():
                # Find global index from label
                label = sub.qubit_labels.get(local_q, "")
                for gq, gl in model.qubit_labels.items():
                    if gl == label:
                        global_spins[gq] = spin
                        break

    return {
        "best_spins": global_spins,
        "best_energy": model.energy(global_spins),
        "n_partitions": len(sub_models),
        "n_iterations": self._n_iterations,
    }

TTSAnalyzer

Time-to-solution quality metric for quantum annealing.

TTS measures the total time required to find the ground state with probability p_target, given:

- p_success: probability of finding ground state in a single run
- t_anneal: time per annealing run

TTS = t_anneal × (log(1 - p_target) / log(1 - p_success))

This is the standard benchmark metric used in D-Wave literature.
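As a quick worked example of the formula (illustrative numbers only): with a 10% per-run success probability and a 20 µs anneal, reaching the ground state with 99% confidence requires

```python
import math

# TTS = t_anneal * log(1 - p_target) / log(1 - p_success)
p_success, p_target, t_anneal_us = 0.10, 0.99, 20.0

n_runs = math.log(1 - p_target) / math.log(1 - p_success)
tts_us = t_anneal_us * n_runs
print(round(n_runs, 1), round(tts_us, 1))  # ~43.7 runs, ~874.2 µs
```

so a solver with a faster anneal but lower p_success can still win on TTS, which is exactly the comparison compare_solvers() automates.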

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1780–1905
class TTSAnalyzer:
    """Time-to-solution quality metric for quantum annealing.

    TTS measures the total time required to find the ground state
    with probability p_target, given:
    - p_success: probability of finding ground state in a single run
    - t_anneal: time per annealing run

    TTS = t_anneal × (log(1 - p_target) / log(1 - p_success))

    This is the standard benchmark metric used in D-Wave literature.
    """

    def compute(
        self,
        p_success: float,
        t_anneal_us: float,
        p_target: float = 0.99,
    ) -> Dict[str, float]:
        """Compute TTS metric.

        Parameters
        ----------
        p_success : float
            Probability of finding ground state per run.
        t_anneal_us : float
            Time per annealing run in microseconds.
        p_target : float
            Target cumulative success probability (default 0.99).

        Returns
        -------
        dict
            ``tts_us``, ``tts_ms``, ``n_runs_needed``,
            ``p_success``, ``p_target``.
        """
        if p_success <= 0:
            return {
                "tts_us": float("inf"),
                "tts_ms": float("inf"),
                "n_runs_needed": float("inf"),
                "p_success": 0.0,
                "p_target": p_target,
            }

        if p_success >= 1.0:
            return {
                "tts_us": t_anneal_us,
                "tts_ms": t_anneal_us / 1000.0,
                "n_runs_needed": 1.0,
                "p_success": 1.0,
                "p_target": p_target,
            }

        n_runs = math.log(1 - p_target) / math.log(1 - p_success)
        tts = t_anneal_us * n_runs

        return {
            "tts_us": tts,
            "tts_ms": tts / 1000.0,
            "n_runs_needed": n_runs,
            "p_success": p_success,
            "p_target": p_target,
        }

    def from_samples(
        self,
        energies: list[float],
        ground_state_energy: float,
        t_anneal_us: float = 20.0,
        tolerance: float = 1e-6,
        p_target: float = 0.99,
    ) -> Dict[str, float]:
        """Compute TTS from a set of sample energies.

        Parameters
        ----------
        energies : list[float]
            Observed sample energies.
        ground_state_energy : float
            Known or estimated ground state energy.
        t_anneal_us : float
            Time per annealing run.
        tolerance : float
            Energy tolerance for ground state match.
        p_target : float
            Target success probability.

        Returns
        -------
        dict
            TTS metrics.
        """
        n_gs = sum(1 for e in energies if abs(e - ground_state_energy) < tolerance)
        p_success = n_gs / max(len(energies), 1)
        return self.compute(p_success, t_anneal_us, p_target)

    def compare_solvers(
        self,
        results: Dict[str, Dict[str, Any]],
        ground_state_energy: float,
        tolerance: float = 1e-6,
    ) -> Dict[str, Dict[str, Any]]:
        """Compare TTS across multiple solvers.

        Parameters
        ----------
        results : dict
            Solver name → {energies, t_anneal_us}.
        ground_state_energy : float
            Known ground state energy.

        Returns
        -------
        dict
            Solver name → TTS metrics.
        """
        comparison: Dict[str, Dict[str, Any]] = {}
        for name, data in results.items():
            comparison[name] = self.from_samples(
                energies=data["energies"],
                ground_state_energy=ground_state_energy,
                t_anneal_us=data.get("t_anneal_us", 20.0),
                tolerance=tolerance,
            )
        return comparison

compute(p_success, t_anneal_us, p_target=0.99)

Compute TTS metric.

Parameters

p_success : float
    Probability of finding ground state per run.
t_anneal_us : float
    Time per annealing run in microseconds.
p_target : float
    Target cumulative success probability (default 0.99).

Returns

dict
    tts_us, tts_ms, n_runs_needed, p_success, p_target.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
Lines 1793–1843
def compute(
    self,
    p_success: float,
    t_anneal_us: float,
    p_target: float = 0.99,
) -> Dict[str, float]:
    """Compute TTS metric.

    Parameters
    ----------
    p_success : float
        Probability of finding ground state per run.
    t_anneal_us : float
        Time per annealing run in microseconds.
    p_target : float
        Target cumulative success probability (default 0.99).

    Returns
    -------
    dict
        ``tts_us``, ``tts_ms``, ``n_runs_needed``,
        ``p_success``, ``p_target``.
    """
    if p_success <= 0:
        return {
            "tts_us": float("inf"),
            "tts_ms": float("inf"),
            "n_runs_needed": float("inf"),
            "p_success": 0.0,
            "p_target": p_target,
        }

    if p_success >= 1.0:
        return {
            "tts_us": t_anneal_us,
            "tts_ms": t_anneal_us / 1000.0,
            "n_runs_needed": 1.0,
            "p_success": 1.0,
            "p_target": p_target,
        }

    n_runs = math.log(1 - p_target) / math.log(1 - p_success)
    tts = t_anneal_us * n_runs

    return {
        "tts_us": tts,
        "tts_ms": tts / 1000.0,
        "n_runs_needed": n_runs,
        "p_success": p_success,
        "p_target": p_target,
    }
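The repeated-runs formula above, `n_runs = log(1 − p_target) / log(1 − p_success)`, can be checked standalone. A minimal sketch of the same math (`tts_us` here is an illustrative helper, not part of the bridge API):

```python
import math

def tts_us(p_success: float, t_anneal_us: float, p_target: float = 0.99) -> float:
    """Time-to-solution: anneal time times the runs needed to reach p_target."""
    if p_success <= 0:
        return float("inf")          # ground state never seen: TTS diverges
    if p_success >= 1.0:
        return t_anneal_us           # every run succeeds: one run suffices
    n_runs = math.log(1 - p_target) / math.log(1 - p_success)
    return t_anneal_us * n_runs

# With p_success = 0.5 and a 20 µs anneal, ~6.64 runs are needed,
# giving a TTS of ~132.9 µs.
print(round(tts_us(0.5, 20.0), 1))
```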

from_samples(energies, ground_state_energy, t_anneal_us=20.0, tolerance=1e-06, p_target=0.99)

Compute TTS from a set of sample energies.

Parameters

- `energies` (list[float]): Observed sample energies.
- `ground_state_energy` (float): Known or estimated ground-state energy.
- `t_anneal_us` (float): Time per annealing run in microseconds (default 20.0).
- `tolerance` (float): Energy tolerance for a ground-state match (default 1e-6).
- `p_target` (float): Target success probability (default 0.99).

Returns

- dict: TTS metrics (same keys as `compute`).

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def from_samples(
    self,
    energies: list[float],
    ground_state_energy: float,
    t_anneal_us: float = 20.0,
    tolerance: float = 1e-6,
    p_target: float = 0.99,
) -> Dict[str, float]:
    """Compute TTS from a set of sample energies.

    Parameters
    ----------
    energies : list[float]
        Observed sample energies.
    ground_state_energy : float
        Known or estimated ground state energy.
    t_anneal_us : float
        Time per annealing run.
    tolerance : float
        Energy tolerance for ground state match.
    p_target : float
        Target success probability.

    Returns
    -------
    dict
        TTS metrics.
    """
    n_gs = sum(1 for e in energies if abs(e - ground_state_energy) < tolerance)
    p_success = n_gs / max(len(energies), 1)
    return self.compute(p_success, t_anneal_us, p_target)
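The success-probability estimate is just a count of samples within tolerance of the ground-state energy. A standalone check of that counting step (the sample values are invented):

```python
# Count samples within tolerance of the ground-state energy,
# exactly as from_samples does (illustrative data).
energies = [-12.0, -11.5, -12.0, -9.8, -12.0]
ground_state_energy, tolerance = -12.0, 1e-6

n_gs = sum(1 for e in energies if abs(e - ground_state_energy) < tolerance)
p_success = n_gs / max(len(energies), 1)   # max(..., 1) guards empty input
print(p_success)  # 0.6 — three of five samples hit the ground state
```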

compare_solvers(results, ground_state_energy, tolerance=1e-06)

Compare TTS across multiple solvers.

Parameters

- `results` (dict): Solver name → `{"energies": ..., "t_anneal_us": ...}`.
- `ground_state_energy` (float): Known ground-state energy.
- `tolerance` (float): Energy tolerance for a ground-state match (default 1e-6).

Returns

- dict: Solver name → TTS metrics.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def compare_solvers(
    self,
    results: Dict[str, Dict[str, Any]],
    ground_state_energy: float,
    tolerance: float = 1e-6,
) -> Dict[str, Dict[str, Any]]:
    """Compare TTS across multiple solvers.

    Parameters
    ----------
    results : dict
        Solver name → {energies, t_anneal_us}.
    ground_state_energy : float
        Known ground state energy.
    tolerance : float
        Energy tolerance for ground state match.

    Returns
    -------
    dict
        Solver name → TTS metrics.
    """
    comparison: Dict[str, Dict[str, Any]] = {}
    for name, data in results.items():
        comparison[name] = self.from_samples(
            energies=data["energies"],
            ground_state_energy=ground_state_energy,
            t_anneal_us=data.get("t_anneal_us", 20.0),
            tolerance=tolerance,
        )
    return comparison
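The `results` mapping is keyed by solver name, and missing `t_anneal_us` entries fall back to 20 µs. A self-contained sketch of the comparison loop (the solver names and energies are invented; `tts_from_samples` is an illustrative stand-in for the analyzer, not the real API):

```python
import math

def tts_from_samples(energies, ground, t_anneal_us=20.0, tol=1e-6, p_target=0.99):
    # Mirrors TTSAnalyzer.from_samples -> compute (sketch, not the real API).
    p = sum(1 for e in energies if abs(e - ground) < tol) / max(len(energies), 1)
    if p <= 0:
        return float("inf")
    if p >= 1.0:
        return t_anneal_us
    return t_anneal_us * math.log(1 - p_target) / math.log(1 - p)

results = {
    "sa_python": {"energies": [-8.0, -8.0, -7.5], "t_anneal_us": 100.0},
    "dwave_qpu": {"energies": [-8.0, -7.0, -7.0]},  # t_anneal_us defaults to 20.0
}
comparison = {
    name: tts_from_samples(data["energies"], -8.0, data.get("t_anneal_us", 20.0))
    for name, data in results.items()
}
```

Note how a slow solver with a high hit rate (`sa_python`) can still lose to a fast solver with a lower hit rate (`dwave_qpu`) once TTS is computed.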

export_ising_json(model, path)

Export Ising model to JSON format.

Parameters

- `model` (IsingModel): The model to export.
- `path` (str): Output file path.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def export_ising_json(model: IsingModel, path: str) -> None:
    """Export Ising model to JSON format.

    Parameters
    ----------
    model : IsingModel
        The model to export.
    path : str
        Output file path.
    """
    data = {
        "type": "ising",
        "n_qubits": model.n_qubits,
        "source": model.source,
        "offset": model.offset,
        "h": {str(k): v for k, v in model.h.items()},
        "J": {f"{i},{j}": v for (i, j), v in model.J.items()},
        "qubit_labels": {str(k): v for k, v in model.qubit_labels.items()},
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
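Because JSON keys must be strings, `h` keys are stringified and `J` pairs are encoded as `"i,j"`; reading an export back therefore needs the inverse mapping. A minimal round-trip sketch (the field values are invented):

```python
import json

# Serialized form mirroring the key encoding used by export_ising_json.
serialized = json.dumps({
    "type": "ising", "n_qubits": 2, "source": "demo", "offset": 0.0,
    "h": {"0": -0.5, "1": 0.25},
    "J": {"0,1": -1.0},
    "qubit_labels": {"0": "n0", "1": "n1"},
})

data = json.loads(serialized)
h = {int(k): v for k, v in data["h"].items()}                         # "0" -> 0
J = {tuple(map(int, k.split(","))): v for k, v in data["J"].items()}  # "0,1" -> (0, 1)
```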

export_qubo_json(model, path)

Export QUBO model to JSON format.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def export_qubo_json(model: QUBOModel, path: str) -> None:
    """Export QUBO model to JSON format."""
    data = {
        "type": "qubo",
        "n_qubits": model.n_qubits,
        "source": model.source,
        "offset": model.offset,
        "Q": {f"{i},{j}": v for (i, j), v in model.Q.items()},
        "qubit_labels": {str(k): v for k, v in model.qubit_labels.items()},
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

export_bqm(model)

Export Ising model as a dimod BinaryQuadraticModel.

Returns

- dimod.BinaryQuadraticModel or None: BQM object, or None if dimod is not installed.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def export_bqm(model: IsingModel) -> Any:
    """Export Ising model as a dimod BinaryQuadraticModel.

    Returns
    -------
    dimod.BinaryQuadraticModel or None
        BQM object, or None if dimod is not installed.
    """
    if not _HAS_DIMOD:
        return None
    return dimod.BinaryQuadraticModel(model.h, model.J, model.offset, "SPIN")

visualize_ising(model)

Generate ASCII visualization of an Ising model.

Returns

- str: Multi-line ASCII representation.

Source code in src/sc_neurocore/bridges/quantum_annealing.py
Python
def visualize_ising(model: IsingModel) -> str:
    """Generate ASCII visualization of an Ising model.

    Returns
    -------
    str
        Multi-line ASCII representation.
    """
    lines: list[str] = [
        f"┌{'=' * 50}┐",
        f"│ Ising Model: {model.source:<34} │",
        f"│ Qubits: {model.n_qubits:<4}  Couplers: {len(model.J):<5}          │",
        f"│ Offset: {model.offset:<40.4f} │",
        f"└{'=' * 50}┘",
        "",
        "  Biases (h):",
    ]

    for i in sorted(model.h.keys()):
        label = model.qubit_labels.get(i, f"q{i}")
        bar_len = int(abs(model.h[i]) * 20)
        bar = "█" * min(bar_len, 20)
        sign = "+" if model.h[i] >= 0 else "-"
        lines.append(f"    {label:>8}: {sign}{bar:<20} ({model.h[i]:+.4f})")

    lines.append("")
    lines.append("  Couplings (J):")
    for i, j in sorted(model.J.keys()):
        li = model.qubit_labels.get(i, f"q{i}")
        lj = model.qubit_labels.get(j, f"q{j}")
        jij = model.J[(i, j)]
        kind = "ferro" if jij < 0 else "anti"
        lines.append(f"    {li:>8} ─── {lj:<8}: {jij:+.4f} [{kind}]")

    return "\n".join(lines)
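The `ferro`/`anti` labels follow the sign convention of the Ising energy the §7 benchmarks evaluate: `E(s) = offset + Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j` with `s_i ∈ {−1, +1}`, so `J < 0` rewards aligned spins. A minimal sketch (the `ising_energy` helper is illustrative, not the bridge API):

```python
# Ising energy under the convention J < 0 = ferromagnetic
# (aligned spins lower the energy).
def ising_energy(h, J, spins, offset=0.0):
    e = offset + sum(hv * spins[i] for i, hv in h.items())
    e += sum(jv * spins[i] * spins[j] for (i, j), jv in J.items())
    return e

h = {0: 0.0, 1: 0.0}
J = {(0, 1): -1.0}                              # ferromagnetic coupler
aligned = ising_energy(h, J, {0: 1, 1: 1})      # -1.0 (ground state)
opposed = ising_energy(h, J, {0: 1, 1: -1})     # +1.0
```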

7. Performance benchmarks

Output from bench_quantum_annealing.py

Text Only
SC-NeuroCore Quantum Annealing Benchmark
Rust backend available: False


============================================================
  BENCHMARK: Ising Energy Evaluation
============================================================
     N    Python (µs)      Rust (µs)    Speedup
------------------------------------------------------------
    10           10.5            N/A        N/A
    20           42.1            N/A        N/A
    50          190.1            N/A        N/A
   100          470.3            N/A        N/A

============================================================
  BENCHMARK: Batch Energy (10000 configurations)
============================================================
     N    Python (ms)      Rust (ms)    Speedup
------------------------------------------------------------
    10           49.2            N/A        N/A
    20          190.7            N/A        N/A
    50         1175.1            N/A        N/A

============================================================
  BENCHMARK: Simulated Annealing (1000 sweeps × 10 reads)
============================================================
     N    Python (ms)      Rust (ms)    Speedup
------------------------------------------------------------
    10          235.8            N/A        N/A
    20         1317.8            N/A        N/A
    50        17721.0            N/A        N/A

============================================================
  BENCHMARK COMPLETE
============================================================