Quantifying Uncertainty in AI

Research toward reliable uncertainty quantification

Uncertainty quantification in AI models is a foundational challenge in modern data analysis.

Scientists are exploring how AI can accurately estimate uncertainty in scientific calculations, such as those used in astrophysics. Current AI-based methods struggle to assess this uncertainty reliably, particularly with noisy data or complex problems. This highlights the need for improved calibration techniques that ensure AI models provide meaningful and trustworthy uncertainty estimates. At Fermilab, scientists study how errors propagate from data through AI methods, and how to quantify those impacts.
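To make the calibration idea concrete, here is a minimal sketch (not part of the project's actual codebase) of one standard check: if a model reports a Gaussian uncertainty for each prediction, then about 68% of the true values should fall within one reported standard deviation. The simulated data, the overconfident sigma of 0.5, and the `coverage` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a model predicts a mean and a standard deviation
# (its uncertainty estimate) for each target. Here we simulate a model
# whose actual prediction errors have sigma = 1.0, but which reports
# sigma = 0.5 -- i.e., it is overconfident.
n = 10_000
true_values = rng.normal(0.0, 1.0, size=n)
pred_means = true_values + rng.normal(0.0, 1.0, size=n)  # true error sigma = 1.0
pred_sigmas = np.full(n, 0.5)                             # claimed sigma = 0.5

def coverage(y, mu, sigma, k=1.0):
    """Fraction of targets falling inside mu +/- k*sigma.

    For well-calibrated Gaussian uncertainties, k=1 should cover ~68%.
    """
    return float(np.mean(np.abs(y - mu) <= k * sigma))

cov = coverage(true_values, pred_means, pred_sigmas)
# An overconfident model covers far less than the nominal 68%.
print(f"1-sigma coverage: {cov:.2f} (nominal 0.68)")
```

A well-calibrated model would report sigma close to 1.0 here and recover roughly 68% coverage; the gap between observed and nominal coverage is one simple diagnostic of miscalibration.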

Resources

nips.cc/virtual/2024/105809 (conference talk)

arxiv.org/abs/2506.03037 (preprint)