
Computational Models Of Visual Adaptation And Color Constancy And Applications

Posted on: 2018-12-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S B Gao
Full Text: PDF
GTID: 1318330515451761
Subject: Biomedical engineering
Abstract/Summary:
In this work, we develop new models of human vision. Visual constancy and adaptation are the processes by which neurons in the sensory pathways of the brain (e.g., sight, smell) adapt their signals to the constantly changing world that we encounter. This adaptive process is thought to be important because it allows a neuron, which has a limited number of discrete activity states, to encode more of the world, thereby allowing us to see, feel, smell and hear over a greater range. Visual constancy and adaptation effects are ubiquitous, but we lack a general framework in which to understand them.

Based on the single-illuminant hypothesis, our first work on color constancy (CC) proposes a new CC model that imitates the functional properties of the human visual system (HVS), from the single-opponent cells in the retina to the double-opponent (DO) cells in V1 and the possible neurons in higher visual cortex. The idea behind the proposed double-opponency based CC model originates from the key observation that the color distribution of the responses of DO cells to color-biased images coincides well with the vector denoting the light source color.

Our second work on single-illuminant CC finds that a simple yet robust invariance, called achromatic-estimation invariance (AEI), helps estimate the illuminant color. This invariance rests on the observation that the estimates of the illuminant color components across the different channels of a scene are approximately equal to each other, for both indoor and outdoor scenes, as derived from statistical experiments on three synthetic datasets and four real-world datasets. Based on this AEI invariance and the classical image formation model, we propose a simple method that obtains the illuminant color by computing, in each channel, the ratio of the average over all pixels of the color-biased scene to that of the roughly recovered scene (obtained by local normalization in this work).

Our third work takes multi-illuminant color constancy (MCC) as a challenging task. In this part, we propose a novel model inspired by the mechanisms of the HVS to estimate multiple illuminants from a color-biased image. The model rests on two hypotheses that respectively reflect the bottom-up and top-down mechanisms of the visual system. The physical motivation for the bottom-up processing is our statistical finding that bright and dark image areas play different roles in encoding the illuminant. However, purely bottom-up mechanisms are not enough to handle the color bias introduced by large colorful regions. Thus, we introduce top-down constraints, learning a color transformation that further improves the performance of the bottom-up MCC.

Our fourth work contributes to CC by discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS) at the same time. We first study the CSS effect on illuminant estimation arising in inter-dataset CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured with a distinct CSS. We show the clear degradation of existing CC models in the inter-CC setting. We then propose a simple way to overcome this degradation: quickly learn a transform matrix between the two distinct CSSs (CSS-1 and CSS-2), use the learned matrix to convert the data rendered under CSS-1 into CSS-2, and then train and apply the CC model on the color-biased images under CSS-2, without the burden of acquiring a training set under CSS-2. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color-constant images, invariant to changes of both the illuminant and the camera sensors.

A central hypothesis about sensory information processing is that the brain may use Bayesian inference to respond to outside stimuli by exploiting both recent experience (e.g., the expectation, or prior) and the current evidence (e.g., the likelihood). In the last part of this thesis, we extend an influential statistical model based on the spatial interactions between the center and the surround receptive fields of a neuron. To our surprise, this spatially statistical model can explain the results showing how a neuron produces the typical adapted responses observed in neurophysiological experiments, if we assume that temporal adaptation modifies the 'state' of the imitated neural network.
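The AEI-based estimator described above can be sketched as follows. This is a minimal illustration, not the thesis's exact implementation: the box-filter local normalization, the window size `win`, and the von Kries diagonal correction are illustrative assumptions standing in for the paper's "roughly recovered scene" step.

```python
import numpy as np

def box_mean(ch, win):
    """Local mean of a 2-D channel via cumulative sums (win must be odd)."""
    pad = win // 2
    p = np.pad(ch, pad, mode='edge')
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = ch.shape
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w]) / win ** 2

def estimate_illuminant_aei(img, win=15, eps=1e-6):
    """Per-channel illuminant estimate: the ratio of the mean of the
    color-biased image to the mean of a roughly recovered (locally
    normalized) image, L2-normalized to a unit illuminant color vector."""
    img = img.astype(np.float64)
    recovered = np.stack(
        [img[..., c] / (box_mean(img[..., c], win) + eps) for c in range(3)],
        axis=-1)
    illum = img.mean(axis=(0, 1)) / (recovered.mean(axis=(0, 1)) + eps)
    return illum / np.linalg.norm(illum)

def von_kries_correct(img, illum):
    """Diagonal (von Kries) correction: divide each channel by its
    illuminant component, scaled so a neutral illuminant leaves the
    image unchanged."""
    return img / (np.sqrt(3.0) * illum)
```

Note that local normalization cancels the (single, spatially uniform) illuminant inside each channel, so the per-channel ratio isolates the illuminant color up to the scene's average reflectance, which the AEI observation treats as roughly achromatic.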
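The inter-CC conversion step can likewise be sketched with a least-squares fit. The names and the paired-sample setup here are illustrative assumptions: `rgb_css1` and `rgb_css2` hold the responses of the same surfaces under the two sensors, and a single 3x3 linear map is assumed, as a stand-in for whatever calibration data and fitting procedure the thesis actually uses.

```python
import numpy as np

def learn_css_transform(rgb_css1, rgb_css2):
    """Least-squares 3x3 matrix M such that rgb_css1 @ M.T ~= rgb_css2.
    Each input is an (N, 3) array of responses to the same N surfaces."""
    M_t, *_ = np.linalg.lstsq(rgb_css1, rgb_css2, rcond=None)
    return M_t.T

def convert_to_css2(rgb_css1, M):
    """Map data rendered under CSS-1 into the CSS-2 space, so a CC model
    can be trained and applied there without recapturing a training set."""
    return rgb_css1 @ M.T
```

Because the map is linear and global, only a handful of paired color patches are needed to fit it, which is why the abstract describes the learning step as quick compared with acquiring a full training set under the second camera.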
Keywords/Search Tags: color constancy, natural image statistics, visual adaptation, illuminant estimation, color image processing