arXiv:2512.17058v1 Announce Type: new
Abstract: We prove the last remaining implication needed to establish the equivalence of the following conditions for a complete separable metric space $X$:
(1) The $k$-nearest neighbour classifier is (weakly) universally consistent in $X$, (2) The strong Lebes…
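
As a concrete reference for the learning rule in question, here is a minimal sketch of the $k$-nearest neighbour classifier over an arbitrary metric; the metric, data set, and choice of $k$ below are illustrative and not taken from the paper.

```python
# k-NN classification in a general metric space: predict by majority vote
# among the k training points closest to the query under the given metric.
from collections import Counter

def knn_predict(train, metric, k, x):
    """train: list of (point, label) pairs; metric: distance function; x: query."""
    neighbours = sorted(train, key=lambda pl: metric(pl[0], x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Example on the real line with the usual metric d(a, b) = |a - b|.
train = [(0.1, "a"), (0.2, "a"), (0.4, "a"), (0.9, "b"), (1.1, "b")]
print(knn_predict(train, lambda a, b: abs(a - b), k=3, x=0.8))  # -> "b"
```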

arXiv:2512.17484v1 Announce Type: new
Abstract: Containers conveniently represent a wide class of inductive data types. Their derivatives compute representations of types of one-hole contexts, useful for implementing tree-traversal algorithms. In the category of containers and cartesian morphisms, …
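
The simplest instance of the construction mentioned here is the derivative of the list container, whose one-hole contexts are pairs (prefix, suffix). The Python sketch below is only an elementary illustration of that idea, not the categorical development of the paper.

```python
# One-hole contexts ("derivative") of the list container: a context is the pair
# of elements before and after the hole, and plugging a value back into the
# hole recovers the original list.
def contexts(xs):
    """Enumerate (focus, one-hole context) pairs for a list."""
    return [(xs[i], (xs[:i], xs[i + 1:])) for i in range(len(xs))]

def plug(x, ctx):
    """Fill the hole of a context with x."""
    before, after = ctx
    return before + [x] + after

xs = [1, 2, 3]
for focus, ctx in contexts(xs):
    assert plug(focus, ctx) == xs  # plugging the focus back recovers the list
```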

arXiv:2505.14886v2 Announce Type: replace
Abstract: Winning competitive debates requires sophisticated reasoning and argumentation skills. Competitive debate poses unique challenges: (1) Time constraints force debaters to make strategic choices about which points to pursue rather than cov…

arXiv:2512.16969v1 Announce Type: new
Abstract: Despite advances in scientific AI, a coherent framework for Scientific General Intelligence (SGI), the ability to autonomously conceive, investigate, and reason across scientific domains, remains lacking. We present an operational SGI definition grounde…

arXiv:2510.08461v2 Announce Type: replace
Abstract: We introduce a refinement-based Christoffel sampling (RCS) algorithm for least squares approximation in the span of a given, generally non-orthogonal set of functions $\Phi_n = \{\phi_1, \dots, \phi_n\}$. A standard sampling strategy for this prob…
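
For orientation, the sketch below shows one common Christoffel-based weighted least squares strategy (sampling proportional to the inverse Christoffel function of an orthonormalized basis and reweighting the residuals), which is presumably close to the standard strategy the truncated abstract alludes to; it does not reproduce the RCS refinement step, and the basis, target function, and sample sizes are illustrative choices.

```python
# Standard inverse-Christoffel-function sampling for weighted least squares
# with an orthonormal Legendre basis on [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
n = 10                                     # dimension of the approximation space
pool = rng.uniform(-1.0, 1.0, size=20000)  # candidate points, uniform reference measure

def onb(x):
    """Legendre basis, orthonormal w.r.t. the uniform probability measure on [-1, 1]."""
    V = np.polynomial.legendre.legvander(x, n - 1)
    return V * np.sqrt(2 * np.arange(n) + 1)

Phi = onb(pool)
k_n = (Phi ** 2).sum(axis=1)     # inverse Christoffel function: sum_j phi_j(x)^2
p = k_n / k_n.sum()              # sample with probability proportional to k_n

m = 40                           # number of samples, a small multiple of n
idx = rng.choice(pool.size, size=m, replace=True, p=p)
x, w = pool[idx], n / k_n[idx]   # least-squares weights w(x) = n / k_n(x)

f = lambda t: np.exp(t) * np.sin(3 * t)    # function to approximate
A, y, sw = onb(x), f(x), np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

test = np.linspace(-1, 1, 500)
print("max error of weighted LS fit:", np.abs(onb(test) @ coef - f(test)).max())
```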

arXiv:2512.17386v1 Announce Type: new
Abstract: Deterministic auctions are attractive in practice due to their transparency, simplicity, and ease of implementation, motivating a sharp understanding of when they match randomized designs. We study deterministic implementation in single-item auctions …

arXiv:2512.17077v1 Announce Type: new
Abstract: Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to Autoregressive Models (ARMs), utilizing parallel decoding to overcome sequential bottlenecks. However, existing research focuses primarily on kernel-level optimizations…

arXiv:2412.16827v2 Announce Type: replace
Abstract: As intelligent reflecting surface (IRS) has emerged as a new and promising technology capable of configuring the wireless environment favorably, channel estimation for IRS-assisted multiple-input multiple-output (MIMO) systems has garnered extensi…

arXiv:2511.21417v2 Announce Type: replace
Abstract: In pseudo-Boolean solving, the currently most successful unit propagation strategy is a hybrid mode that combines the watched-literal scheme with the counting method. This short paper introduces new heuristics for this hybrid decision, which are able t…
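
As background for the counting side of that hybrid, here is a small slack-based propagation sketch for a single normalized pseudo-Boolean constraint (all coefficients positive); the watched-literal scheme and the paper's new selection heuristics are not shown, and the encoding below is an illustrative choice.

```python
# Counting-based unit propagation for a constraint sum_i a_i * l_i >= d:
# slack = (maximum still-achievable left-hand side) - d; slack < 0 signals a
# conflict, and any unassigned literal with coefficient > slack is forced true.
def pb_propagate(terms, degree, assignment):
    """terms: list of (coef, var, positive); assignment: dict var -> bool."""
    def lit_value(var, positive):
        if var not in assignment:
            return None
        return assignment[var] if positive else not assignment[var]

    slack = sum(c for c, v, pos in terms if lit_value(v, pos) is not False) - degree
    if slack < 0:
        return "CONFLICT"
    return [(v, pos) for c, v, pos in terms
            if lit_value(v, pos) is None and c > slack]

# Example: 3 x1 + 2 x2 + 1 x3 >= 3 with x1 = False forces x2 and x3 to True.
terms = [(3, "x1", True), (2, "x2", True), (1, "x3", True)]
print(pb_propagate(terms, 3, {"x1": False}))  # -> [('x2', True), ('x3', True)]
```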

arXiv:2512.17607v1 Announce Type: new
Abstract: Physics-informed neural networks (PINNs) have recently emerged as a prominent paradigm for solving partial differential equations (PDEs), yet their training strategies remain underexplored. While hard prioritization methods inspired by finite element …
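
For context, the sketch below sets up the plain residual-based collocation loss that PINN training strategies (including prioritization or reweighting of collocation points) operate on, here for the 1D Poisson problem u''(x) = f(x) with zero boundary values. PyTorch is used purely as an illustrative framework; the network size, optimizer, and source term are arbitrary choices, and this is not the training scheme of the paper.

```python
# Minimal PINN for u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, where
# f is manufactured so that u(x) = sin(pi x) is the exact solution.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u - f(x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_col = torch.rand(256, 1)            # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])   # boundary points

for step in range(2000):
    opt.zero_grad()
    loss = (pde_residual(x_col) ** 2).mean() + (net(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
```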

arXiv:2512.17017v1 Announce Type: new
Abstract: Working with abstract information often relies on static, symbolic representations that constrain exploration. We introduce Explorable Ideas, a framework that externalizes abstract concepts into explorable environments where physical navigation coordi…

arXiv:2512.17146v1 Announce Type: new
Abstract: Genomic Foundation Models (GFMs), such as Evolutionary Scale Modeling (ESM), have demonstrated remarkable success in variant effect prediction. However, their security and robustness under adversarial manipulation remain largely unexplored. To address…

arXiv:2512.08554v2 Announce Type: replace
Abstract: In this article, we give two extended space formulations, one for the induced tree polytope and one for the induced path polytope of chordal graphs, with vertex and edge variables.
These formulations are obtained by proving that the induced tree and path extended …
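
For readers less familiar with the terminology, an extended (space) formulation describes a polytope as the projection of a polyhedron living in a higher-dimensional space; the display below states this standard notion only, not the specific formulations of the article.

```latex
% Standard notion of an extended formulation: P is the projection of Q.
\[
  P \;=\; \pi(Q), \qquad
  Q \;=\; \bigl\{ (x, y) \in \mathbb{R}^n \times \mathbb{R}^m : A x + B y \le b \bigr\},
  \qquad \pi(x, y) = x .
\]
```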

arXiv:2510.16882v2 Announce Type: replace
Abstract: Supervised fine-tuning (SFT) is a commonly used technique to adapt large language models (LLMs) to downstream tasks. In practice, SFT on a full dataset is computationally expensive and sometimes suffers from overfitting or bias amplification. This…