Do You "Trust" This Visualization? An Inventory to Measure Trust in Visualizations
arXiv:2503.17670v2 Announce Type: replace
Abstract: Trust plays a critical role in visual data communication and decision-making, yet existing visualization research employs varied trust measures, making it challenging to compare and synthesize findings across studies. In this work, we first took a bottom-up, data-driven approach to understand what visualization readers mean when they say they "trust" a visualization. We compiled and adapted a broad set of trust-related statements from existing inventories and collected responses to visualizations with varying degrees of trustworthiness. Through exploratory factor analysis, we derived an operational definition of trust in visualizations. Our findings indicate that people perceive a trustworthy visualization as one that presents credible information and is comprehensible and usable. Building on this insight, we developed an eight-item inventory: four core items measuring trust in visualizations and four optional items controlling for individual differences in baseline trust tendency. We established the inventory's internal consistency reliability using McDonald's omega, confirmed its content validity by demonstrating alignment with theoretically-grounded trust dimensions, and validated its criterion validity through two trust games with real-world stakes. Finally, we illustrate how this standardized inventory can be applied across diverse visualization research contexts. Utilizing our inventory, future research can examine how design choices, tasks, and domains influence trust, and how to foster appropriate trusting behavior in human-data interactions.