Memory-Guided Unified Hardware Accelerator for Mixed-Precision Scientific Computing
arXiv:2601.04476v1 Announce Type: new
Abstract: Recent advances in hardware acceleration have enabled powerful specialized accelerators for finite element computations, spiking neural network inference, and sparse tensor operations. However, existing approaches face fundamental limitations: (1) finite element methods lack comprehensive rounding-error analysis for reduced-precision implementations and rely on fixed precision-assignment strategies that cannot adapt to varying numerical conditioning; (2) spiking neural network accelerators cannot handle non-spike operations and suffer from bit-width escalation as network depth increases; and (3) FPGA tensor accelerators optimize only for dense computations and require manual configuration for each sparsity pattern. To address these challenges, we introduce the Memory-Guided Unified Hardware Accelerator for Mixed-Precision Scientific Computing, a novel framework that integrates three enhanced modules with memory-guided adaptation for efficient mixed-workload processing on a unified platform. Our approach employs memory-guided precision selection to overcome fixed-precision limitations, integrates experience-driven bit-width management and dynamic parallelism adaptation for enhanced spiking neural network acceleration, and introduces curriculum learning for automatic sparsity-pattern discovery. Extensive experiments on FEniCS, COMSOL, and ANSYS benchmarks, the MNIST, CIFAR-10, CIFAR-100, and DVS-Gesture datasets, and COCO 2017 demonstrate a 2.8% improvement in numerical accuracy, a 47% throughput increase, a 34% energy reduction, and a 45-65% throughput improvement over specialized accelerators. Our framework enables unified processing of finite element methods, spiking neural networks, and sparse computations on a single platform, eliminating the data-transfer overhead between separate units.
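The abstract does not specify how memory-guided precision selection works internally. A minimal sketch of one plausible reading in Python, assuming a memory of past (condition number, precision, outcome) records combined with the standard a priori bound that forward error scales roughly as condition number times unit roundoff; the names `PrecisionMemory` and `select_precision` are illustrative assumptions, not the paper's API:

```python
import math

# Hypothetical sketch: pick a floating-point format for a linear solve
# from past experience instead of a fixed assignment. All names here
# (PrecisionMemory, select_precision) are assumptions for illustration.

PRECISIONS = ["fp16", "fp32", "fp64"]  # candidate formats, cheapest first
UNIT_ROUNDOFF = {"fp16": 2.0**-11, "fp32": 2.0**-24, "fp64": 2.0**-53}

class PrecisionMemory:
    """Remembers (log10 condition number, precision, succeeded) outcomes."""

    def __init__(self):
        self.records = []  # list of (log_cond, precision, succeeded)

    def record(self, cond, precision, succeeded):
        self.records.append((math.log10(cond), precision, succeeded))

    def select_precision(self, cond, tol):
        """Return the cheapest format expected to meet `tol` at conditioning `cond`."""
        log_cond = math.log10(cond)
        for p in PRECISIONS:
            # A priori estimate: forward error ~ cond * unit roundoff.
            if cond * UNIT_ROUNDOFF[p] > tol:
                continue
            # Veto formats that previously failed on similarly conditioned problems.
            failures = [r for r in self.records
                        if r[1] == p and abs(r[0] - log_cond) < 0.5 and not r[2]]
            if not failures:
                return p
        return PRECISIONS[-1]  # fall back to the widest format

# Usage: select, solve, check the residual, and feed the outcome back.
memory = PrecisionMemory()
p = memory.select_precision(cond=1e3, tol=1e-4)
# ... run the solve in precision p, measure the achieved error, then:
memory.record(cond=1e3, precision=p, succeeded=True)
print(p)  # "fp32": fp16 is ruled out since 1e3 * 2**-11 ~ 0.49 > 1e-4
```

The feedback loop is the point of the sketch: the a priori bound gives a conservative starting choice, while recorded failures on similarly conditioned problems let the selector adapt where the bound is too optimistic, which is what a fixed precision assignment cannot do.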