
Modeling And Analysis Of Brain Functions By Training Spiking Neural Networks With Spike Prop Algorithm

Posted on: 2021-05-12    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C F Hong    Full Text: PDF
GTID: 1480306548473624    Subject: Detection Technology and Automation
Abstract/Summary:
The brain is the key to human intelligence, and studying the mechanisms underlying its cognitive functions has both academic and practical value. Owing to recent advances in deep learning, artificial intelligence based on artificial neural networks has achieved considerable success in applications such as image recognition, speech recognition, and game playing. By contrast, our understanding of the mechanisms underlying the brain's cognitive functions is still very limited, and the large gap between deep learning models and biological neural systems has hindered the further development of deep learning toward a more general artificial intelligence. In this thesis, a new approach is proposed for studying the network mechanisms of brain function based on trained spiking neural networks. The framework provides a versatile modeling tool for theoretical neuroscience and a basic model for further developments in general artificial intelligence. The main contents of this thesis are as follows.

First, brain functions are carried out by neurons transmitting and processing spiking activity through the network, and single-neuron dynamics is the basic unit of this processing. Hence, this work first addresses the modeling of single-neuron and network activity. Several generalized leaky integrate-and-fire (GLIF) models are proposed to capture nonlinear neural dynamics at different time scales. Based on a canonical network model, the relationship between network structure and activity characteristics such as oscillations, correlations, and irregularity is analyzed. Furthermore, the feasibility of temporal coding under noisy conditions is tested in a feedforward propagation experiment. These results provide a theoretical foundation for building functional neural networks.

Second, a learning rule that back-propagates errors through spike timing is introduced to build functional networks. The gradient learning rule for the proposed generalized neuron models is derived, and the algorithm is shown to be valid across various network structures and dynamical states. The slow and unstable convergence of Spike Prop-type learning algorithms is analyzed; based on the inferred causes, a biologically plausible spike-threshold rule is proposed that sets a lower limit on the time derivative of the membrane potential at the spike, and several modulation methods acting on firing rates and synaptic weights are introduced. This yields a new theoretical approach: modeling brain functions by training spiking neural networks to perform specific tasks, and then exploring the underlying mechanisms by manipulating and analyzing the trained networks.
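The thesis derives its gradient rule for the proposed GLIF models; as a rough, simplified illustration of the Spike Prop-style chain rule and of lower-bounding the membrane-potential derivative at the spike, the Python sketch below updates the weights of a single spike-response neuron from the error in its output spike time. The kernel, the constants (V_TH, TAU, DUDT_MIN, eta), and the function names are assumptions made for illustration, not the thesis's exact formulation.

```python
import numpy as np

# Minimal sketch of a SpikeProp-style update for a single output neuron.
# All names and constants are hypothetical, chosen only to illustrate the idea.

V_TH = 1.0        # firing threshold
TAU = 5.0         # kernel time constant (ms)
DUDT_MIN = 0.1    # assumed lower bound on dV/dt at the spike, keeping the
                  # gradient finite when the potential crosses threshold slowly
eta = 0.01        # learning rate

def epsilon(s):
    """Spike-response kernel: post-synaptic potential s ms after an input spike."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def d_epsilon(s):
    """Time derivative of the kernel."""
    return np.where(s > 0, (1.0 / TAU) * np.exp(1.0 - s / TAU) * (1.0 - s / TAU), 0.0)

def spikeprop_step(w, t_in, t_out, t_target):
    """One gradient step on the weights of one output neuron.

    w        : synaptic weights, shape (n_inputs,)
    t_in     : presynaptic spike times, shape (n_inputs,)
    t_out    : actual output spike time (potential first reaches V_TH)
    t_target : desired output spike time
    """
    # dV/dw_i evaluated at the output spike time
    dV_dw = epsilon(t_out - t_in)
    # dV/dt at the output spike, lower-bounded to stabilise learning
    dV_dt = max(np.sum(w * d_epsilon(t_out - t_in)), DUDT_MIN)
    # Chain rule: dt_out/dw_i = -(dV/dw_i) / (dV/dt); dE/dt_out = t_out - t_target
    grad = (t_out - t_target) * (-dV_dw / dV_dt)
    return w - eta * grad
```

Without the lower bound, a shallow threshold crossing makes dV/dt nearly zero and the weight update explodes, which is one plausible reading of the slow and unstable behavior the thesis attributes to Spike Prop-type algorithms.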
Finally, to verify the generality and versatility of the proposed framework, the learning algorithms are applied to several cognitive tasks, and the network properties and dynamical mechanisms of the trained networks are discussed. With different network structures, dynamical states, and cost functions, networks are trained to perform image classification, motion planning, and feedback motion control tasks. In the image classification tasks, a fully connected network learns MNIST handwritten-digit classification, and a network with local connections is trained to classify Caltech dataset images. By systematically manipulating the networks, we discuss how structural biases and biological constraints affect the learning procedure and the information-processing mechanisms of the trained networks. In the motion control tasks, the GLIF neuron model is adopted to capture multi-timescale dynamics, and the control scheme is learned with Spike Prop-type algorithms. The network is first trained on a motion planning task in which it receives motor commands encoded by synchronous spikes and outputs the desired motion trajectory as sustained spiking activity; this task shows that information carried in spike timing can readily be transformed to longer timescales in a neural network. Next, feedback motion control using a temporal code is achieved with both supervised learning and reinforcement learning methods. Furthermore, two important demands of real-world control problems are addressed in the reinforcement learning experiments: force optimization and robustness of the control system to variations in the plant. These learning experiments demonstrate the feasibility of the proposed framework across various network structures and dynamics. Future work can introduce more biological properties into the spiking neural networks and further link network mechanisms to brain functions. This work offers insight for both theoretical research in neuroscience and the further development of intelligent applications.
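To make the temporal-coding idea concrete, the minimal latency-encoding sketch below maps pixel intensities to first-spike times, so that an image such as an MNIST digit can be presented to a spiking network as a pattern of spike timings. The linear mapping, the 20 ms window, and the function name are assumptions for illustration only; the thesis's own encoding schemes may differ.

```python
import numpy as np

def latency_encode(image, t_max=20.0):
    """Encode pixel intensities in [0, 1] as first-spike latencies in [0, t_max] ms.

    Brighter pixels fire earlier; zero-intensity pixels never fire (np.inf).
    """
    image = np.asarray(image, dtype=float).ravel()
    times = np.full(image.shape, np.inf)
    active = image > 0
    times[active] = t_max * (1.0 - image[active])
    return times

# Example: a toy 2x2 "image"
spike_times = latency_encode([[1.0, 0.5], [0.25, 0.0]])
print(spike_times)   # [ 0.  10.  15.  inf]
```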
Keywords/Search Tags:Spiking Neural Network, Spike Prop Error Back-propagation, Brain Function, Temporal Coding, Supervised Learning, Reinforcement Learning