
Neural models of multi-scale image completion and of featural bias during attentive memory search

Posted on: 2010-02-05
Degree: Ph.D
Type: Dissertation
University: Boston University
Candidate: Gaddam, Sai Chaitanya
Full Text: PDF
GTID: 1448390002978712
Subject: Biology
Abstract/Summary:
This dissertation develops neural network models for vision and recognition, and tests model performance on large-scale images.

The first project introduces CONFIGR (CONtour FIgure GRound), a computational model based on principles of biological vision. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage that identifies figure pixels from spatially local input information. The resulting figure is fed back to the "early vision" stage for long-range completion via filling-in. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel. Once the pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices.

Multi-scale simulations illustrate the vision/recognition system. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Originally designed to fill in missing contours in an incomplete image, such as a dashed line, the same CONFIGR system also connects sparse dots and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling in across gaps of any length where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still-camera images.

The second project considers a problem faced by online learning systems, which may be presented with input patterns only once during training. In this situation, later training may be distorted by undue attention to initial subsets of features that were useful for earlier memory encoding. During learning, Adaptive Resonance Theory (ART) models encode attended featural subsets, called critical feature patterns. When a novel input activates an established category, only the input features present in the critical pattern remain active in working memory.

Biased ARTMAP (bARTMAP) is a neural network that solves the problem of over-emphasis on early features by biasing attention away from previously attended features once an input has made a predictive error. Simulations on a variety of benchmark problems demonstrate that adding biasing to ARTMAP search improves recognition accuracy.
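The biasing idea can be illustrated with a toy sketch. This is a deliberate simplification, not the published bARTMAP algorithm: the fuzzy-ART-style match rule, the decay factor, and all function names here are illustrative assumptions.

```python
import numpy as np

def choose_category(x, weights, bias):
    """Pick the category whose prototype best matches the biased input.
    Toy fuzzy-ART-style choice: score = |min(x * bias, w)| / |w|."""
    e = x * bias  # attention bias suppresses previously over-attended features
    scores = [np.minimum(e, w).sum() / (w.sum() + 1e-9) for w in weights]
    return int(np.argmax(scores))

def bias_after_error(bias, x, w, decay=0.25):
    """After a predictive error, reduce attention to the features that were
    active in the matched critical feature pattern (toy biasing rule)."""
    attended = np.minimum(x, w) > 0
    new_bias = bias.copy()
    new_bias[attended] *= decay
    return new_bias

# Toy run: one input, two stored category prototypes, uniform initial bias.
x = np.array([1.0, 1.0, 0.0, 1.0])
weights = [np.array([1.0, 1.0, 0.0, 0.0]),
           np.array([0.0, 0.0, 1.0, 1.0])]
bias = np.ones(4)

cat = choose_category(x, weights, bias)            # first choice wins on early features
bias = bias_after_error(bias, x, weights[cat])     # suppose that prediction failed
cat2 = choose_category(x, weights, bias)           # biased search now favors the other category
```

After the simulated predictive error, attention shifts away from the features that drove the first (wrong) match, so the search lands on a different category, which is the qualitative behavior the abstract attributes to bARTMAP.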
Keywords/Search Tags: Image, Model, Neural, Recognition, Completion, Memory, CONFIGR