A perceptual decision is often accompanied by a subjective feeling of confidence. Because humans can readily report this feeling in a laboratory setting, confidence reports have long been objects of study. However, the computations underlying confidence reports are not well understood.

It has been proposed that confidence in categorization tasks should be defined as the observer's estimated probability of being correct. This definition extends Bayesian decision theory so that it describes confidence reports as well as decisions. Although this definition is elegant, the notion that confidence reports are Bayesian is a hypothesis rather than an established fact. In this dissertation, our aim is to test that hypothesis, which we call the Bayesian confidence hypothesis (BCH).

We find that a proposed approach to determining the computational origins of confidence is flawed. Some authors have suggested that one way to determine whether confidence is Bayesian is to derive qualitative signatures of Bayesian confidence and then test whether they are present in behavioral or neural data. We analyze some of these proposed signatures and find that they are less useful than they might have seemed. Specifically, they are neither necessary nor sufficient signatures of Bayesian confidence, which means that observing (or failing to observe) them provides an uncertain amount of evidence for (or against) the BCH. There has also been confusion in the literature about a second possible signature; we find no evidence that this signature is ever expected under Bayesian confidence. Finally, looking for these signatures is a qualitative exercise, because it may not always be clear whether data, especially noisy data, display a signature. Our analysis of the signatures leads us to conclude that the most powerful way to test the BCH is quantitative model comparison.

We test human subjects on a set of binary categorization tasks designed to distinguish Bayesian models of confidence from other plausible models. In all experiments, the primary variable of interest to the observer is the orientation of a stimulus.

In one set of experiments, we induce sensory uncertainty by manipulating properties of the stimulus, such as contrast. We find that subjects take their sensory uncertainty into account and that confidence appears qualitatively Bayesian. Quantitatively, however, heuristic models provide a much better fit to the data. Our conclusions are robust to variants of both the experiment and the Bayesian models.

In another experiment, we induce sensory uncertainty by manipulating the subjects' attention. As in the previous set of experiments, we find that confidence reports are qualitatively Bayesian. In this experiment, however, we are unable to distinguish the Bayesian model from the heuristic models.

Finally, we describe an exploratory analysis intended to explain why confidence reports might not be Bayesian. We train feedforward neural networks on our tasks as if they were naive human subjects and fit our behavioral models to the data produced by the trained networks. We find that the same heuristic models that fit our human data well also fit the network-generated data. We suggest a future research program in which neural network behavior is compared to human behavior on the basis of model rankings.
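For concreteness, the definition of confidence invoked by the BCH can be sketched as a posterior probability. The following is a minimal illustration, assuming a binary categorization task with categories $C \in \{-1, 1\}$ and a noisy internal measurement $x$ of the stimulus orientation; this notation is illustrative and is not taken from the abstract.

% Minimal sketch of Bayesian confidence (assumed notation, not the dissertation's):
% the observer chooses the category with the higher posterior probability, and
% confidence is the posterior probability of that chosen category, i.e., the
% estimated probability of being correct.
\[
  \hat{C} = \arg\max_{C \in \{-1, 1\}} p(C \mid x),
  \qquad
  \text{confidence} = p(\hat{C} \mid x)
  = \frac{p(x \mid \hat{C})\, p(\hat{C})}{\sum_{C \in \{-1, 1\}} p(x \mid C)\, p(C)} .
\]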