From Factoid Questions to Data Product Requests: Benchmarking Data Product Discovery over Tables and Text
arXiv:2510.21737v3 Announce Type: replace
Abstract: Data products are reusable, self-contained assets designed for specific business use cases. Automating their discovery and generation is of great industry interest, as it enables asset discovery in large data lakes and supports analytical Data Product Requests (DPRs). No benchmark currently exists for data product discovery: existing datasets focus on answering single factoid questions over individual tables rather than on collecting multiple data assets into broader, coherent products. To address this gap, we introduce Data Product Benchmark, the first user-request-driven data product benchmark over hybrid table-text corpora.
Our framework systematically repurposes existing table-text QA datasets by clustering related tables and passages into coherent data products, generating professional-level analytical requests that span both data sources, and validating benchmark quality through multi-LLM evaluation. Data Product Benchmark preserves full provenance while producing actionable, analyst-like data product requests. Baseline experiments with hybrid retrieval methods establish the feasibility of DPR evaluation, reveal current limitations, and point to new opportunities for automatic data product discovery research.
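To make the baseline setup concrete, the following is a minimal sketch of a hybrid (sparse + dense) retrieval baseline for ranking corpus assets against a data product request. The library choices (rank_bm25, sentence-transformers), the embedding model, and the score-blending scheme are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: hybrid sparse + dense retrieval for a Data Product Request.
# Libraries, model name, and the alpha-weighted blend are assumptions.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

def hybrid_rank(request: str, assets: list[str], alpha: float = 0.5) -> list[int]:
    """Rank assets (serialized tables and text passages) for a DPR.

    Blends min-max-normalized BM25 scores with dense cosine similarities
    using weight alpha; returns asset indices, best match first.
    """
    # Sparse scores: simple whitespace tokenization keeps the sketch minimal.
    bm25 = BM25Okapi([a.lower().split() for a in assets])
    sparse = np.array(bm25.get_scores(request.lower().split()))

    # Dense scores: cosine similarity via normalized sentence embeddings.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode([request] + assets, normalize_embeddings=True)
    dense = emb[1:] @ emb[0]

    def norm(x: np.ndarray) -> np.ndarray:
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    combined = alpha * norm(sparse) + (1 - alpha) * norm(dense)
    return list(np.argsort(-combined))
```

Because a DPR targets a set of assets rather than a single answer cell, a baseline like this would be scored on how well the top-ranked assets cover the gold data product (e.g., set-level precision/recall) rather than on answer exact match.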
Code and datasets are available at:
Dataset: https://huggingface.co/datasets/ibm-research/data-product-benchmark
Code: https://github.com/ibm/data-product-benchmark
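Assuming the dataset follows standard Hugging Face Hub conventions, a quick way to inspect it might look like the sketch below; the available configs, splits, and field names are assumptions, so consult the dataset card for the actual schema.

```python
# Hedged sketch: loading the benchmark from the Hugging Face Hub.
# The default config/split and record schema are assumptions.
from datasets import load_dataset

ds = load_dataset("ibm-research/data-product-benchmark")
print(ds)                          # available splits and their sizes
first_split = list(ds.keys())[0]
print(ds[first_split][0])          # first record of the first split
```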