The Native Spiking Microarchitecture: From Iontronic Primitives to Bit-Exact FP8 Arithmetic

Published: December 8, 2025 | arXiv ID: 2512.07724v1

By: Zhengzheng Tang

Potential Business Impact:

Enables deterministic, Transformer-grade FP8 arithmetic on emerging post-silicon hardware, with a reported 17x reduction in Linear-layer latency and built-in tolerance to noisy devices.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

The 2025 Nobel Prize in Chemistry for Metal-Organic Frameworks (MOFs) and recent breakthroughs by Huanting Wang's team at Monash University establish angstrom-scale channels as promising post-silicon substrates with native integrate-and-fire (IF) dynamics. However, harnessing these stochastic, analog materials for deterministic, bit-exact AI workloads (e.g., FP8) remains paradoxical. Existing neuromorphic methods often settle for approximation, failing the precision standards of Transformer workloads. To bridge the gap "from stochastic ions to deterministic floats," we propose a Native Spiking Microarchitecture. Treating noisy neurons as logic primitives, we introduce a Spatial Combinational Pipeline and a Sticky-Extra Correction mechanism. Validation across all 16,129 FP8 operand pairs confirms 100% bit-exact alignment with PyTorch. Crucially, our architecture reduces Linear-layer latency to O(log N), yielding a 17x speedup. Physical simulations further demonstrate robustness against extreme membrane leakage (β ≈ 0.01), effectively immunizing the system against the stochastic nature of the hardware.
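The 16,129-pair figure matches 127 squared, the number of ordered pairs drawn from the 127 non-negative finite FP8 E4M3 encodings (0x00 through 0x7E; 0x7F is NaN in the e4m3fn format), which suggests an exhaustive sweep over one sign class. Below is a minimal sketch of such a validation harness against PyTorch, under that assumption; spiking_fp8_add is a hypothetical stand-in for the paper's spiking adder, stubbed here with the reference path itself so the harness runs end to end.

```python
import torch

# The 127 non-negative finite FP8 E4M3 bit patterns (0x00..0x7E).
# 0x7F encodes NaN in torch.float8_e4m3fn, so it is excluded;
# 127 * 127 = 16,129 operand pairs, matching the abstract's count.
codes = torch.arange(0, 0x7F, dtype=torch.uint8)
vals = codes.view(torch.float8_e4m3fn).to(torch.float32)

a = vals.repeat_interleave(len(vals))  # left operand of every pair
b = vals.repeat(len(vals))             # right operand of every pair

def spiking_fp8_add(x, y):
    # Hypothetical device under test; the real version would drive the
    # spiking microarchitecture. Stubbed with the reference computation.
    return (x + y).to(torch.float8_e4m3fn)

ref = (a + b).to(torch.float8_e4m3fn)  # PyTorch reference: FP32 add, round to FP8
out = spiking_fp8_add(a, b)

# Compare raw encodings, not decoded floats, so rounding or sign-of-zero
# discrepancies cannot hide behind value equality.
match = (ref.view(torch.uint8) == out.view(torch.uint8)).all()
print(f"bit-exact on all {len(a):,} pairs: {bool(match)}")
```

The O(log N) Linear-layer claim is consistent with replacing serial accumulation by a balanced adder tree; a toy sketch of that depth reduction, independent of the spiking substrate:

```python
def tree_reduce(xs):
    """Pairwise reduction: O(log N) depth vs. O(N) serial accumulation."""
    xs = list(xs)
    while len(xs) > 1:
        paired = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:  # carry an odd leftover element into the next level
            paired.append(xs[-1])
        xs = paired
    return xs[0]

assert tree_reduce(range(8)) == sum(range(8))
```

With a leak factor of β ≈ 0.01, the membrane potential decays almost completely between steps, so a design that treats each neuron as a stateless combinational gate, as the Spatial Combinational Pipeline does, is naturally insensitive to that leakage.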

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Emerging Technologies