
Research On Distributed Deep Neural Network For Edge Computing

Posted on: 2020-06-05
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zou
Full Text: PDF
GTID: 2428330590458270
Subject: Systems Engineering

Abstract/Summary:
Recently, deep neural networks can be trained on the big data collected from numerous Internet of Things (IoT) devices and then used to support decision making on those devices, which makes IoT devices more intelligent and more useful in practical applications. However, on the one hand, the computing and storage resources of IoT devices are limited, so many neural network models cannot be deployed directly on IoT devices; on the other hand, purely cloud-based inference incurs high bandwidth cost and high latency for users. Therefore, neither traditional cloud-based deep learning nor device-only deep learning is well suited to the IoT environment. To address these problems, this thesis studies the Distributed Deep Neural Network (DDNN) based on cloud-edge-end collaborative computing and improves on existing solutions.

Firstly, based on the data characteristics of IoT devices, this thesis presents a multi-view feature weighted fusion method that accounts for the differing importance of each view (sketched below). Compared with traditional fusion methods, it reduces the amount of data transmitted from the edge side to the cloud side and further improves the accuracy of the DDNN.

Secondly, to address the large number of parameters in the convolutional and fully connected layers, a lightweight DDNN model with a slidable exit is designed. On the one hand, the computational complexity of the edge-side convolutional layers is reduced by moving the exit point to a lower level of the DDNN; on the other hand, the computational complexity of the edge-side fully connected layers is reduced by compressing feature maps with a trainable convolutional bag-of-features model (sketched below). As a result, this model improves the accuracy of the DDNN under limited computing resources.

Finally, inspired by the Teacher-Student training paradigm, this thesis proposes a new training strategy for the DDNN: samples are distinguished by their complexity, and the edge-side and cloud-side models are co-trained through a sample-based adaptive weighting method (sketched below), yielding better overall performance.
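The following is a minimal, non-authoritative sketch of what a weighted multi-view fusion module could look like in PyTorch. The class name `WeightedViewFusion`, the learnable per-view scores, and the single linear projection are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch: learnable per-view weights for fusing edge features
# into one tensor before uploading it to the cloud (names are illustrative).
import torch
import torch.nn as nn

class WeightedViewFusion(nn.Module):
    def __init__(self, num_views: int, feat_dim: int):
        super().__init__()
        # One scalar importance score per view, learned end to end.
        self.view_scores = nn.Parameter(torch.zeros(num_views))
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, view_feats: list) -> torch.Tensor:
        # view_feats: list of [batch, feat_dim] tensors, one per IoT view.
        stacked = torch.stack(view_feats, dim=1)           # [batch, views, feat_dim]
        weights = torch.softmax(self.view_scores, dim=0)   # normalized view importance
        fused = (stacked * weights.view(1, -1, 1)).sum(dim=1)
        return self.proj(fused)                            # single tensor sent to the cloud
```

Because the weights are normalized with a softmax, unimportant views are down-weighted automatically during training, while only one fused feature vector per sample needs to leave the edge.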
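Below is a hedged sketch of an edge-side exit that compresses a feature map with a trainable bag-of-features style 1x1 convolution and decides locally whether to exit based on a normalized-entropy threshold. The module names, the codeword count, and the entropy criterion are assumptions made for illustration only.

```python
# Hypothetical sketch of an edge-side early exit: a 1x1 convolution produces soft
# codeword assignments (a trainable bag-of-features style histogram), which feeds
# a small classifier; low-entropy predictions exit locally, the rest go to the cloud.
import torch
import torch.nn as nn

class EdgeExit(nn.Module):
    def __init__(self, in_channels: int, num_codewords: int, num_classes: int):
        super().__init__()
        self.codebook = nn.Conv2d(in_channels, num_codewords, kernel_size=1)
        self.classifier = nn.Linear(num_codewords, num_classes)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: [batch, in_channels, H, W] from an early convolutional block.
        assign = torch.softmax(self.codebook(feat_map), dim=1)  # soft codeword assignments
        histogram = assign.mean(dim=(2, 3))                     # [batch, num_codewords]
        return self.classifier(histogram)

def should_exit_locally(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Normalized entropy as the exit criterion: confident samples stay on the edge.
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[1])))
    return entropy / max_entropy < threshold
```

The pooled histogram has a fixed, small size regardless of the spatial resolution of the feature map, which is what keeps the edge-side fully connected classifier lightweight.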
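The last sketch illustrates one plausible form of sample-based adaptive weighting for co-training the edge and cloud exits, using the cloud model's confidence as a proxy for sample complexity. This particular weighting rule is an assumption, not the strategy the thesis actually derives.

```python
# Hypothetical sketch of co-training with sample-based adaptive weights: "easy"
# samples (confident cloud predictions) weigh the edge exit more heavily, "hard"
# samples weigh the cloud exit more; the weighting rule is illustrative only.
import torch
import torch.nn.functional as F

def co_training_loss(edge_logits, cloud_logits, targets):
    # Per-sample cross-entropy for both exits.
    edge_loss = F.cross_entropy(edge_logits, targets, reduction="none")
    cloud_loss = F.cross_entropy(cloud_logits, targets, reduction="none")

    # Use cloud confidence as a proxy for sample complexity (detached so the
    # weights themselves receive no gradient).
    with torch.no_grad():
        cloud_conf = torch.softmax(cloud_logits, dim=1).max(dim=1).values
    easy = cloud_conf            # close to 1 for easy samples
    hard = 1.0 - cloud_conf      # close to 1 for hard samples

    return (easy * edge_loss + hard * cloud_loss).mean()
```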
Keywords/Search Tags:Edge Computing, Distributed Deep Neural Network, Collaborative Inference, Model Compression