Abstract
Decision confidence is a forecast about the probability that a decision will be correct. Confidence can be framed as an objective mathematical quantity: the Bayesian posterior probability, providing a formal definition of statistical decision confidence. Here we use this definition as a starting point to develop a normative statistical framework for decision confidence. We analytically prove interrelations between statistical decision confidence and other observable decision measures. Among these is a counterintuitive property of confidence: the lowest average confidence occurs when classifiers err in the presence of the strongest evidence. These results lay the foundations for a mathematically rigorous treatment of decision confidence that can lead to a common framework for understanding confidence across different research domains, from human behavior to neural representations.
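As a minimal sketch of this definition, assuming a symmetric Gaussian toy model (the model and notation here are illustrative additions, not taken from the abstract): let the true stimulus be $s \in \{-\mu, +\mu\}$ with equal priors, let the percept be $\hat{s} \sim \mathcal{N}(s, \sigma^{2})$, and let the choice be $\hat{d} = \mathrm{sign}(\hat{s})$. Statistical decision confidence is then the Bayesian posterior probability that the chosen option is correct,
\[
c(\hat{s}) \;=\; P\!\left(s = \mu\,\mathrm{sign}(\hat{s}) \mid \hat{s}\right) \;=\; \frac{1}{1 + \exp\!\left(-2\mu\lvert\hat{s}\rvert/\sigma^{2}\right)},
\]
so confidence equals $0.5$ for a percept exactly at the decision boundary and approaches $1$ as the percept moves away from it.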