One way to make sense of the complexities of our visual system is to hypothesize that evolution has developed nearly optimal solutions to the problems organisms face in their environment. In this thesis, we study two such principles of optimality for the visual code.

In the first half of this dissertation, we consider the principle of decorrelation. Influential theories assert that the center-surround receptive fields of retinal neurons remove spatial correlations present in the visual world. This decorrelation has been proposed to maximize information transmission to the brain by avoiding the transfer of redundant information through optic nerve fibers of limited capacity. While these theories successfully account for several aspects of visual perception, the notion that the retina's outputs are less correlated than its inputs has never been tested directly at the site of the putative information bottleneck, the optic nerve. We presented visual stimuli with naturalistic image correlations to the salamander retina while recording the responses of many retinal ganglion cells with a microelectrode array. The output signals of the ganglion cells are indeed decorrelated compared to the visual input, but the receptive fields are only partly responsible: much of the decorrelation arises from nonlinear processing by the neurons rather than from their linear receptive fields. This form of decorrelation dramatically limits information transmission. We show that, rather than improving coding efficiency, the nonlinearity is well suited to enabling a combinatorial code or signaling robust stimulus features.

In the second half of this dissertation, we develop an ideal observer model for the task of discriminating between two small stimuli that move along an unknown retinal trajectory induced by fixational eye movements.
The ideal observer is provided with the responses of a model retina and guesses the stimulus identity by the maximum-likelihood rule, which involves sums over all random-walk trajectories. These sums can be implemented in a biologically plausible way. The necessary ingredients are: neurons modeled as a cascade of a linear filter followed by a static nonlinearity, a recurrent network with additive and multiplicative interactions between neurons, and divisive global inhibition. This architecture implements Bayesian inference by representing likelihoods as neural activity, which can then diffuse through the recurrent network and modulate the influence of later information. We also develop approximation methods for characterizing the performance of the ideal observer. We find that the effect of positional uncertainty is essentially to slow signal acquisition. The time scaling is related to the size of the uncertainty region, which in turn depends on both the signal strength and the statistics of the fixational eye movements. These results imply that localization cues should determine the slope of the performance curve over time.
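The decorrelating effect of a static nonlinearity, central to the first half of the thesis, can be illustrated with a minimal numerical sketch. Two model signals share a common input (a stand-in for the spatial correlations of naturalistic stimuli; the signal sizes, noise levels, and threshold are illustrative assumptions, not the salamander data), and a thresholding nonlinearity, playing the role of the second stage of a linear-nonlinear cascade, reduces their pairwise correlation below that of the linear outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two model ganglion-cell signals driven by a strong shared input, standing in
# for naturalistic spatial correlations (illustrative parameters).
n = 100_000
shared = rng.standard_normal(n)
x1 = shared + 0.5 * rng.standard_normal(n)
x2 = shared + 0.5 * rng.standard_normal(n)

def corr(a, b):
    """Pearson correlation coefficient between two signals."""
    return np.corrcoef(a, b)[0, 1]

# Static thresholding nonlinearity: the output stage of an LN cascade.
def rectify(x, theta=1.0):
    return np.maximum(x - theta, 0.0)

r_linear = corr(x1, x2)                       # correlation of linear outputs
r_nonlinear = corr(rectify(x1), rectify(x2))  # correlation after thresholding
```

Here `r_nonlinear` comes out well below `r_linear`: thresholding alone decorrelates the outputs, without any change to the linear receptive fields.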
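The recursion described above, diffusion of likelihoods through a recurrent network, multiplicative gating by new evidence, and divisive global inhibition, can be sketched in one dimension. All sizes, kernels, and evidence streams below are illustrative assumptions, not the thesis implementation: each hypothesis carries an activity vector over candidate retinal positions, proportional to the likelihood of the data so far with the trajectory ending at that position.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pos = 64
kernel = np.array([0.25, 0.5, 0.25])   # random-walk step distribution

def step(activity, evidence):
    """Diffuse along the random walk (additive interactions), then gate by
    the new observation (multiplicative interactions)."""
    return np.convolve(activity, kernel, mode="same") * evidence

# Two stimulus hypotheses, uniform prior over positions.
a = np.full(n_pos, 1.0 / n_pos)
b = np.full(n_pos, 1.0 / n_pos)
for _ in range(100):
    ev_a = 0.5 + rng.random(n_pos)     # stand-in for P(spikes_t | pos, A)
    ev_b = 0.5 + rng.random(n_pos)     # stand-in for P(spikes_t | pos, B)
    a, b = step(a, ev_a), step(b, ev_b)
    total = a.sum() + b.sum()          # divisive global inhibition: a shared
    a, b = a / total, b / total        # normalization preserves the ratio

# Maximum-likelihood decision: compare total activity across hypotheses.
log_ratio = np.log(a.sum() / b.sum())
choice = "A" if log_ratio > 0 else "B"
```

The shared normalization keeps activities bounded while leaving the likelihood ratio between hypotheses intact, which is what lets the maximum-likelihood comparison survive the divisive inhibition.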