# Transformers
SC-native transformer blocks built on stochastic attention.
**StochasticTransformerBlock**: S-Former, a spiking transformer with per-head stochastic attention over disjoint feature subspaces. Architecture: Input -> SC Multi-Head Attention -> Add & Norm -> SC Dense FF -> Add & Norm -> Output. `d_model` must be divisible by `n_heads`; each head owns `d_model / n_heads` contiguous channels. Inputs must be finite one- or two-dimensional arrays with trailing dimension `d_model`.
```python
from sc_neurocore.transformers import StochasticTransformerBlock

block = StochasticTransformerBlock(d_model=64, n_heads=4, length=256)
output = block.forward(input_sequence)
```
See Tutorial 54: Spiking Transformers.
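The head partitioning described above can be illustrated directly. The sketch below (plain NumPy, not the library's internal code) shows how `d_model = 64` with `n_heads = 4` yields four disjoint, contiguous 16-channel subspaces:

```python
import numpy as np

d_model, n_heads = 64, 4
assert d_model % n_heads == 0  # required by StochasticTransformerBlock
head_dim = d_model // n_heads  # 16 contiguous channels per head

x = np.random.rand(256, d_model)  # (sequence_length, d_model)

# Head h sees channels [h * head_dim, (h + 1) * head_dim): disjoint and contiguous.
head_inputs = [x[:, h * head_dim : (h + 1) * head_dim] for h in range(n_heads)]
assert all(h.shape == (256, head_dim) for h in head_inputs)
```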
## sc_neurocore.transformers.block

### StochasticTransformerBlock (dataclass)
Spiking Transformer Block (S-Former). Structure: Input -> Multi-Head Attention -> Add & Norm -> Feed Forward -> Add & Norm -> Output.
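As a rough sketch of that structure (hypothetical helper names; the actual implementation is in the source file referenced below), each sublayer is wrapped in a residual connection followed by normalization:

```python
def sformer_block(x, attention, feed_forward, norm1, norm2):
    # Input -> Multi-Head Attention -> Add & Norm
    x = norm1(x + attention(x))
    # -> Feed Forward -> Add & Norm -> Output
    return norm2(x + feed_forward(x))
```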
Source code in `src/sc_neurocore/transformers/block.py`, lines 29–115.
#### forward(x)
`x`: shape `(d_model,)` or `(sequence_length, d_model)`. Returns an array of the same shape.
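A minimal shape check, assuming `forward` accepts NumPy arrays (the docs specify only finite one- or two-dimensional inputs, which `np.zeros` satisfies):

```python
import numpy as np
from sc_neurocore.transformers import StochasticTransformerBlock

block = StochasticTransformerBlock(d_model=64, n_heads=4, length=256)

out_1d = block.forward(np.zeros(64))       # (d_model,) in -> (d_model,) out
out_2d = block.forward(np.zeros((8, 64)))  # (8, d_model) in -> (8, d_model) out
```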
Source code in `src/sc_neurocore/transformers/block.py`, lines 66–101.