arXiv:2601.00449v1 Announce Type: cross
Abstract: Advances in artificial intelligence (AI) and deep learning have raised concerns about their growing energy consumption, even as demand for deploying AI on mobile devices and machines at the edge increases. Binary neural networks (BNNs) have recently gained attention as energy- and memory-efficient models suited to resource-constrained environments; however, training BNNs exactly is computationally challenging because of their discrete parameters. Recent work proposing a framework for training BNNs via quadratic unconstrained binary optimisation (QUBO), together with progress in the design of Ising machines for solving QUBO problems, suggests a potential path to efficiently optimising discrete neural networks. In this work, we extend existing QUBO models for training BNNs to accommodate arbitrary network topologies and propose two novel regularisation methods. The first maximises neuron margins, biasing training toward parameter configurations that yield larger pre-activation magnitudes. The second employs a dropout-inspired iterative scheme in which reduced subnetworks are trained and used to adjust linear penalties on the network parameters. We apply the proposed QUBO formulation to a small binary image classification problem and conduct computational experiments on a GPU-based Ising machine. The numerical results indicate that the proposed regularisation terms modify training behaviour and improve classification accuracy on data not present in the training set.
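
To make the QUBO mapping concrete, the Python sketch below trains a single binary neuron by brute-force QUBO minimisation using a margin-style quadratic surrogate loss. This is a minimal illustration under stated assumptions, not the paper's formulation: the surrogate loss E(w) = sum_i (x_i . w - m * y_i)^2, the margin value m, the toy data, and all variable names are hypothetical, and the exhaustive enumeration merely stands in for an Ising machine.

    # Hypothetical sketch: one binary neuron trained as a QUBO with a
    # margin-style quadratic surrogate loss. The loss, margin m, toy data,
    # and names are illustrative assumptions, not the paper's method.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: 6 samples, 4 features in {-1, +1}, labels in {-1, +1}.
    n_samples, n_features = 6, 4
    X = rng.choice([-1.0, 1.0], size=(n_samples, n_features))
    w_true = rng.choice([-1.0, 1.0], size=n_features)
    y = np.where(X @ w_true >= 0, 1.0, -1.0)

    m = 2.0  # target pre-activation magnitude (the "margin")

    # Surrogate loss E(w) = sum_i (x_i . w - m * y_i)^2 is quadratic in w,
    # so it maps exactly onto a QUBO after the substitution w = 2b - 1.
    A = X.T @ X                  # quadratic coefficients: w^T A w
    c = -2.0 * m * (X.T @ y)     # linear coefficients:    c^T w

    # QUBO over b in {0,1}^n: E = b^T Q b + offset. Linear terms are
    # folded into the diagonal because b_i^2 = b_i.
    Q = 4.0 * A + np.diag(-4.0 * A.sum(axis=1) + 2.0 * c)
    offset = A.sum() - c.sum() + n_samples * m**2

    # Brute-force stand-in for an Ising machine: enumerate all 2^n states.
    best_b, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n_features):
        b = np.array(bits, dtype=float)
        e = b @ Q @ b + offset
        if e < best_e:
            best_b, best_e = b, e

    w = 2.0 * best_b - 1.0
    # Sanity check: the QUBO energy equals the surrogate loss evaluated directly.
    assert np.isclose(best_e, np.sum((X @ w - m * y) ** 2))
    print("recovered weights: ", w)
    print("training accuracy:", np.mean(np.where(X @ w >= 0, 1.0, -1.0) == y))

On realistic instances one would pass Q to a QUBO or Ising solver rather than enumerate all assignments; the point of the sketch is only that a quadratic surrogate with a margin target maps exactly onto the b^T Q b form such machines accept.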