Neuromorphic processors for edge inference.
We design spiking neural network processors from scratch in Verilog and validate them on real FPGA hardware. Event-driven cores fire only on input, learn on-chip through programmable plasticity rules, and run at a fraction of the power of conventional accelerators. Four generations designed. Two open-sourced.
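The event-driven principle is what saves the power: a neuron does no work between spikes, and catches up on its leak only when an input actually arrives. A minimal sketch of that idea in Python (the time constant, threshold, and reset behaviour here are illustrative placeholders, not the hardware's parameters):

```python
import math

class LIFNeuron:
    """Leaky integrate-and-fire neuron updated only on input events."""

    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau              # membrane time constant (ticks), illustrative
        self.threshold = threshold  # firing threshold, illustrative
        self.v = 0.0                # membrane potential
        self.last_t = 0             # time of the last input event

    def on_spike(self, t, weight):
        """Process one input event; return True if the neuron fires."""
        # Apply the whole elapsed leak in one step -- no work between events.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0            # reset after firing
            return True
        return False

n = LIFNeuron()
fired = [n.on_spike(t, 0.4) for t in (1, 2, 3)]  # third input crosses threshold
```

Because idle neurons cost nothing, activity-sparse workloads scale the energy bill with spike count rather than clock count.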
Loihi 1 feature parity. 14-opcode microcode learning engine, barrier-synchronised mesh network-on-chip with multi-chip serial links, triple RV32IMF RISC-V cluster. Validated on AWS F2 (VU47P) and Kria K26.
Programmable microcode neurons with Loihi 2 parity. Graded spike transmission, eligibility traces, reward-modulated plasticity. 28/28 hardware tests on AWS F2. SDK with CPU, GPU, UART, and PCIe backends.
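Eligibility traces are what make reward-modulated plasticity work with delayed feedback: coincident pre/post activity tags a synapse as eligible, and a later reward converts that fading tag into a weight change. A minimal sketch, with decay and learning rates as illustrative placeholders rather than the microcode engine's actual rule:

```python
def step(w, trace, pre, post, reward, trace_decay=0.9, lr=0.1):
    """One timestep of reward-modulated plasticity with an eligibility trace."""
    # The trace decays every step and grows on pre/post coincidence.
    trace = trace * trace_decay + (pre * post)
    # The weight changes only when reward arrives, gated by the trace.
    w = w + lr * reward * trace
    return w, trace

w, trace = 0.5, 0.0
w, trace = step(w, trace, pre=1, post=1, reward=0.0)  # coincidence, no reward yet
w, trace = step(w, trace, pre=0, post=0, reward=1.0)  # delayed reward credits the trace
```

The synapse is strengthened even though the reward arrived a step after the coincidence, because the trace bridges the gap.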
Time-division multiplexing, async hybrid NoC with adaptive routing, 4 parallel learning threads, hardware short-term plasticity and homeostatic scaling. NeurOS virtualisation scheduling 680+ concurrent networks. 19/19 hardware tests on AWS F2.
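Time-division multiplexing is the trick behind hosting many networks on limited silicon: one physical update pipeline cycles through per-neuron state slots, so the neuron count is bounded by memory rather than by logic. A toy sketch of one TDM pass (the state layout and update rule are illustrative, not the core's):

```python
def tdm_tick(states, inputs, decay=0.5):
    """One TDM pass: the shared pipeline updates every virtual neuron in turn."""
    spikes = []
    for slot, (v, inp) in enumerate(zip(states, inputs)):
        v = v * decay + inp            # same datapath, per-slot state
        if v >= 1.0:
            spikes.append(slot)        # record which virtual neuron fired
            v = 0.0                    # reset the slot
        states[slot] = v               # write state back for the next pass
    return spikes

states = [0.0, 0.8, 0.9]
spikes = tdm_tick(states, inputs=[0.2, 0.7, 0.1])  # only slot 1 crosses threshold
```

Virtualisation on top of this is then a scheduling problem: deciding which network's slots the pipeline visits in each pass.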
Dual-chiplet architecture with 4.19M physical neurons, expandable to 134M via 32-context TDM. 16×16 Spike Tensor Core, 8-head spiking attention with KV cache, 8 synapse formats including KAN B-spline. Hardware backpropagation, hyperdimensional computing, Hopfield memory, post-quantum security.
3,229 simulation tests passing. N4-Edge variant runs at 2.6% LUT utilisation and 0.378 W total on Kria K26.
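Of the features above, the Hopfield memory is the easiest to show in miniature: store patterns in a Hebbian weight matrix, then recall a clean pattern from a corrupted cue. A minimal sketch (pattern size and update schedule are illustrative, not the hardware's):

```python
def store(patterns, n):
    """Hebbian outer-product storage with a zero diagonal."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, cue, steps=5):
    """Iterate synchronous sign updates until the state settles."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = store([pattern], 6)
noisy = [1, -1, -1, -1, 1, -1]   # one flipped bit
recovered = recall(W, noisy)     # settles back to the stored pattern
```

The same associative-recall dynamic maps naturally onto spiking hardware, since each update is a thresholded weighted sum.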
All figures above are from FPGA-validated builds. Models are trained on GPU, quantised to 16-bit, and deployed on hardware.
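The GPU-to-hardware handoff hinges on that quantisation step. A minimal sketch assuming a signed Q1.15 fixed-point format (an illustrative choice, not necessarily the format the hardware uses):

```python
def quantise_q15(w):
    """Map a float in [-1, 1) to a signed 16-bit Q1.15 integer."""
    q = int(round(w * 32768))            # scale by 2^15
    return max(-32768, min(32767, q))    # saturate at int16 limits

def dequantise_q15(q):
    """Recover the approximate float value from a Q1.15 integer."""
    return q / 32768

q = quantise_q15(0.5)       # representable exactly in Q1.15
approx = dequantise_q15(q)
```

Values representable in the format round-trip exactly; everything else lands within half a least-significant bit, which is what makes 16-bit deployment of GPU-trained weights workable.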
Open to research collaboration, FPGA contract work, and partnerships.