
Mobile Host Track Prediction Based On Neural Network

Posted on: 2008-04-26
Degree: Master
Type: Thesis
Country: China
Candidate: L Zhang
Full Text: PDF
GTID: 2178360212496751
Subject: Computer system architecture
Abstract/Summary:
Because of its high bandwidth of 54 Mbps or more, the IEEE 802.11 WLAN has been used extensively. The smaller cell radius of a WLAN leads to more frequent handoffs, which has a greater impact on the QoS of terminal applications. If the mobile host is able to predict the next AP it will reach, the handoff time can be reduced and the real-time communication capability of the WLAN improved. Scholars at Dartmouth College in the US found that simple k-order Markov predictors perform better than other predictors. However, high-order Markov predictors suffer from a state-space expansion problem: as the number of APs increases, the state space grows exponentially. To solve this problem, this thesis combines neural network technology with mobile host (MH) track prediction and puts forward a new neural network predictor based on the Elman network, which achieves better experimental results.

Neural networks are information processing systems that can act as highly nonlinear dynamic systems. They have good features such as large-scale parallel processing, adaptivity, self-organization, self-learning, and distributed storage, so control systems using neural networks are more adaptable and more robust. By connection method, neural networks can be divided into three types: feedforward neural networks, recurrent neural networks, and self-organizing neural networks. Among them, feedforward and recurrent neural networks are usually used for time series prediction.

A mobile host's track is essentially a time series. Using the Matlab and NeuroSolutions software environments as tools, this thesis uses the BP network, which belongs to the feedforward networks, and the Elman network, which belongs to the recurrent networks, to predict mobile hosts' tracks.

This thesis expresses each AP as an eight-bit binary number, in essence an 8-dimensional vector whose components are all 0 or 1. This expression brings two problems. First, if the network's scale is relatively small, some leading components of the sample vectors are almost always 0; those components of the input and expectation vectors do not change during training, so the corresponding network weights cannot be adjusted effectively. Second, the sample vectors use 0 and 1, and because the activation functions of the BP network's input and hidden layers are not sensitive to 0 and 1, the convergence speed is reduced and the network may even fail to converge. This thesis therefore applies a special pretreatment to the sample data (see the sketch below), and experiments show that it improves the prediction accuracy.

Static feedforward neural networks trained with the back-propagation algorithm have been widely applied. The Kolmogorov multilayer mapping theorem proves that, with appropriate layers and hidden nodes, a BP network can approximate any nonlinear function to any degree of accuracy. This thesis presents the mathematical model of the BP network and the back-propagation algorithm, sums up various defects of BP networks, and puts forward corresponding solutions for using BP networks.

Theoretical analysis and experiments prove that when the feature values of the samples differ only slightly, a high-order BP network takes too much time to converge, although it gives better prediction results than the Markov predictor. If the feature values of the samples differ greatly, the BP network not only trains for a long time but also does not converge easily; as a result, the prediction accuracy is poor.
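As a minimal sketch of the AP encoding discussed above: the abstract does not state which pretreatment the thesis actually uses, so the {0, 1} to {0.1, 0.9} rescaling below is only an assumed, commonly used way of keeping the binary components out of the insensitive regions of a sigmoid activation. The track values, the history length k, and the function names are likewise hypothetical.

import numpy as np

def encode_ap(ap_id: int) -> np.ndarray:
    """Encode an AP identifier as an 8-bit binary (0/1) vector, MSB first."""
    return np.array([(ap_id >> i) & 1 for i in range(7, -1, -1)], dtype=float)

def pretreat(v: np.ndarray, low: float = 0.1, high: float = 0.9) -> np.ndarray:
    """Rescale binary components away from 0 and 1.

    The abstract only says a 'special pretreatment' is applied; mapping
    {0, 1} -> {0.1, 0.9} is one common choice, assumed here for illustration.
    """
    return low + (high - low) * v

# Build one training pair (previous k APs -> next AP) from a track.
# The track, k, and the exact input window used in the thesis are hypothetical.
track = [3, 17, 42, 17, 88]          # example sequence of AP identifiers
k = 2                                 # history length fed to the predictor
x = np.concatenate([pretreat(encode_ap(a)) for a in track[:k]])  # input, k*8 dims
y = pretreat(encode_ap(track[k]))                                 # expected next-AP code
print(x.shape, y.shape)               # (16,) (8,)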
Recurrent neural networks are able to reflect dynamic characteristics directly and vividly. Compared with static neural networks, recurrent networks need not presuppose the order of the system, which makes them very promising for system identification and control. Because of their inherent feedback structure, recurrent neural networks can often express complex dynamic systems and come close to the actual dynamic process. This thesis introduces an improved Elman network model belonging to the recurrent neural networks: based on the basic Elman network, it introduces a fixed gain on the neurons of the context layer. Trained with the standard back-propagation algorithm, this network is able to approximate highly dynamic systems. Theoretical analysis and experiments show that its training speed is greatly improved compared with the BP network because it has fewer neurons in the hidden layer. Also, because the context-layer neurons introduce feedback, the Elman network has better prediction accuracy than BP networks; especially when the feature values of the samples differ only slightly, better prediction results can be achieved. The results show that the Elman network has a higher ability to resist interference. Owing to structural restrictions, however, the results are not satisfactory when the Elman network identifies high-order systems; optimization can improve the prediction accuracy and convergence speed.

To further improve the approximation capability and dynamic performance, the context layers of the Elman network must be used fully and their role enhanced. The Elman network model only introduces feedback from the hidden-layer neurons, without taking into account feedback from the output-layer neurons. Because feedback information from neurons in all layers affects the network's signal processing capability, this thesis presents a modified Elman network model that introduces output-layer neuron feedback (sketched below) and then derives its learning algorithm using back-propagation. Finally, the stability of the improved Elman neural network is proved in the sense of Lyapunov stability theory, and the optimal learning rates that guarantee its stable convergence are obtained.

Experimental results show that, without increasing the number of neurons and weights, the modified Elman network improves the prediction accuracy and convergence speed. The new network has performance similar to that of the k-order Markov predictor model at lower cost. When the feature values of the samples differ only slightly, the modified Elman network can predict the new AP well, with a success ratio of about 75%~80%.
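The abstract does not give the equations of the improved or modified Elman models, so the following NumPy sketch shows only one plausible structure under stated assumptions: a hidden-context layer with a fixed self-feedback gain alpha, plus an extra output-context layer that feeds the previous output back into the hidden layer with gain beta. The layer sizes, gains, and class name are illustrative, and training (back-propagation through the recurrent connections and the Lyapunov-based learning-rate bounds) is omitted.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ModifiedElman:
    """Forward pass of a modified Elman network (illustrative sketch only).

    Assumptions not stated in the abstract: the context layer keeps the
    previous hidden state with fixed self-feedback gain `alpha`, and an
    output-context layer feeds the previous output back into the hidden
    layer with gain `beta`.
    """

    def __init__(self, n_in=8, n_hidden=6, n_out=8, alpha=0.5, beta=0.5):
        self.alpha, self.beta = alpha, beta
        self.W_xh = rng.normal(scale=0.3, size=(n_hidden, n_in))
        self.W_ch = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # hidden-context weights
        self.W_yh = rng.normal(scale=0.3, size=(n_hidden, n_out))     # output-context weights
        self.W_hy = rng.normal(scale=0.3, size=(n_out, n_hidden))
        self.c = np.zeros(n_hidden)   # hidden-context state
        self.cy = np.zeros(n_out)     # output-context state

    def step(self, x):
        # Hidden layer sees the input plus both context layers.
        h = sigmoid(self.W_xh @ x + self.W_ch @ self.c + self.W_yh @ self.cy)
        y = sigmoid(self.W_hy @ h)
        # Update the context layers for the next time step.
        self.c = h + self.alpha * self.c
        self.cy = y + self.beta * self.cy
        return y

# Feed a (pretreated) 8-dimensional AP vector at each step; the output is the
# network's estimate of the next AP's 8-bit code.
net = ModifiedElman()
for ap_vec in [rng.random(8) for _ in range(3)]:   # placeholder inputs
    pred = net.step(ap_vec)
print(pred.round(2))

With beta = 0 this structure reduces to the improved Elman network that has only the hidden-context gain, and with alpha = beta = 0 it reduces to the basic Elman network, matching the way the abstract presents the models as successive extensions.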
Keywords/Search Tags: Prediction