arXiv:2601.04983v1 Announce Type: cross
Abstract: Scaling quantum computers requires tight integration of cryogenic control electronics with quantum processors, where Digital-to-Analog Converters (DACs) face severe power and area constraints. We investigate quantum neural network (QNN) training and inference under finite DAC resolution across a range of bit depths. Pre-trained QNNs deployed on quantum systems with 6-bit DAC control electronics achieve accuracy nearly indistinguishable from infinite-precision baselines, exhibiting an elbow curve with diminishing returns beyond 4 bits. Training under quantization, however, suffers gradient deadlock below 12-bit resolution, as gradient magnitudes fall below the quantization step size. We introduce temperature-controlled stochasticity that overcomes this deadlock through probabilistic parameter updates, enabling successful training at 4-10 bit resolutions with performance that remarkably matches or exceeds the infinite-precision baseline. Our findings demonstrate that low-resolution control electronics need not compromise QML performance, enabling significant power and area reductions in cryogenic control systems as quantum hardware scales toward practical deployment.
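
The abstract does not spell out the update rule, so the Python sketch below is only one plausible reading of "temperature-controlled probabilistic parameter updates": temperature-scaled stochastic rounding of gradient steps onto a b-bit DAC grid. The function and parameter names (quantize, probabilistic_update, dac_bits, temperature) and the exact probability law are illustrative assumptions, not the authors' method.

# Minimal sketch (not the paper's implementation) of probabilistic parameter
# updates that avoid gradient deadlock when |lr * grad| is smaller than the
# quantization step of a b-bit DAC.
import numpy as np


def quantize(theta, dac_bits, lo=-np.pi, hi=np.pi):
    """Snap rotation angles onto the uniform grid realizable by a b-bit DAC."""
    step = (hi - lo) / (2 ** dac_bits - 1)
    idx = np.clip(np.round((theta - lo) / step), 0, 2 ** dac_bits - 1)
    return lo + idx * step


def probabilistic_update(theta, grad, lr, dac_bits, temperature, rng,
                         lo=-np.pi, hi=np.pi):
    """One SGD step constrained to the DAC grid.

    Deterministic rounding of theta - lr*grad leaves theta unchanged whenever
    |lr*grad| < step (the gradient-deadlock regime). Here the sub-step
    remainder is applied as a one-level move with a probability proportional
    to its size, scaled by a temperature, so parameters keep moving on average.
    """
    step = (hi - lo) / (2 ** dac_bits - 1)
    ideal = -lr * grad                     # continuous-valued update
    whole = np.trunc(ideal / step)         # whole levels we can always take
    frac = ideal / step - whole            # leftover fraction in (-1, 1)
    # Temperature-scaled stochastic rounding: temperature = 1 gives unbiased
    # stochastic rounding; lower temperature makes tiny-gradient moves more
    # likely, higher temperature suppresses them.
    p = np.clip(np.abs(frac) / max(temperature, 1e-12), 0.0, 1.0)
    jump = np.sign(frac) * (rng.random(np.shape(theta)) < p)
    return quantize(theta + (whole + jump) * step, dac_bits, lo, hi)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = quantize(rng.uniform(-np.pi, np.pi, size=8), dac_bits=6)
    grad = 1e-3 * rng.standard_normal(8)   # small gradients: deadlock regime
    theta = probabilistic_update(theta, grad, lr=0.1, dac_bits=6,
                                 temperature=1.0, rng=rng)

With temperature = 1 the expected move equals the continuous update, so training can proceed even when every individual step is too small to cross a quantization level deterministically.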