Deep Learning as a Service (DLaaS) is a new cloud computing service model that provides a wide range of users with deep learning based services, such as image classification, online translation, and speech recognition, by deploying deep neural network models in the cloud. This service model requires direct access to user data to drive the deep learning models in the cloud, which directly threatens user privacy when the input contains personal information such as facial images or medical records. For cloud service providers, the cloud-hosted models are digital assets built with significant resources or sourced from third-party model providers, so the model parameters also require privacy protection. Secure computation offers a feasible way to address the data security problem in this service model: user data and model parameters are encrypted before being uploaded to the cloud, and the cloud server computes directly on the ciphertext to deliver deep learning inference services. According to whether users participate in the main process of secure computation, existing methods fall into two modes, end-cloud collaborative secure computation and secure outsourced computation, and researchers at home and abroad have studied the secure computation of deep neural networks under both modes. However, because deep neural networks have diverse structures, complex operations, and large numbers of parameters, existing secure computation methods struggle to meet the dual requirements of protecting sensitive data and executing inference services efficiently. It therefore remains challenging to perform secure computation of deep neural networks on ciphertext efficiently and accurately without disclosing user data or model parameters. Based on an in-depth analysis of the limitations of existing research, this dissertation first designs a lightweight secure computation scheme for deep learning as a service under the end-cloud collaborative model, which improves computational performance by reducing the user-side overhead while guaranteeing the privacy of user data and model parameters. To further reduce the user-side overhead, the dissertation then proposes two secure and efficient solutions for neural network secure computation under the dual-cloud outsourcing model, based on parallelized computation and approximate computation, respectively. The main work and contributions of this dissertation are summarized as follows:

(1) Lightweight secure computation for deep neural networks under the end-cloud collaboration model. To address the low efficiency of secure deep neural network computation caused by complex model structures and expensive cryptographic operations in the end-cloud collaboration model, we propose a lightweight, task-demand-adaptive secure computation framework for deep neural networks. Combining strategies such as model pruning and nonlinear operation replacement, a lightweight neural network model oriented to ciphertext computation is automatically generated with Neural Architecture Search (NAS) technology, while ensuring that the model accuracy stays above the user-defined accuracy target. To reduce the overhead of secure computation in the nonlinear layers, a secure computation protocol for the nonlinear activation layer is designed based on lightweight secret sharing; a combined offline/online protocol design moves most of the time-consuming cryptographic operations to the offline stage, keeping the online secure computation of the nonlinear layers efficient. The security of the scheme is proved under the standard cryptographic security model, and experiments on three typical deep neural network models in a real end-cloud collaborative computing environment show that, compared with existing work, the scheme reduces computation cost by 2.7×-7.8× and communication cost by 2.1×-4.6× while preserving model accuracy.
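To make the combined offline/online design concrete, the following is a minimal sketch of additive secret sharing over a 64-bit ring with Beaver-triple multiplication: the triple is generated in an offline phase, and the online phase needs only one round in which masked values are opened. The two-party setting, ring size, and all names are illustrative assumptions for exposition, not the dissertation's actual protocol.

import secrets

MOD = 1 << 64  # additive secret sharing over the ring Z_{2^64} (illustrative)

def share(x):
    """Split x into two additive shares: x = x0 + x1 (mod 2^64)."""
    x0 = secrets.randbelow(MOD)
    return x0, (x - x0) % MOD

def reconstruct(x0, x1):
    return (x0 + x1) % MOD

def offline_triple():
    """Offline phase: prepare a Beaver triple c = a * b in shared form."""
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return share(a), share(b), share((a * b) % MOD)

def online_mul(x_sh, y_sh, triple):
    """Online phase: one opening of masked values, then local computation."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    e = reconstruct(x_sh[0] - a0, x_sh[1] - a1)  # opens x - a; x stays hidden
    f = reconstruct(y_sh[0] - b0, y_sh[1] - b1)  # opens y - b; y stays hidden
    z0 = (c0 + e * b0 + f * a0 + e * f) % MOD    # party 0's share of x * y
    z1 = (c1 + e * b1 + f * a1) % MOD            # party 1's share of x * y
    return z0, z1

x_sh, y_sh = share(6), share(7)
assert reconstruct(*online_mul(x_sh, y_sh, offline_triple())) == 42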
(2) Communication-efficient secure outsourced computation for deep neural networks. To address the high inference latency caused by the many communication rounds needed to securely compute nonlinear layers in the secure outsourcing mode, we propose a communication-efficient secure outsourced computation method for deep neural networks. A secure most significant bit (MSB) extraction protocol is designed based on the parallel prefix adder and additive secret sharing, and a secure consecutive-AND computation protocol is designed based on multiplication triple generation and Boolean secret sharing. Building on these protocols, a Rectified Linear Unit (ReLU) layer outsourcing protocol and a Max-pooling layer outsourcing protocol are proposed under the dual-cloud outsourcing model. The interconversion between additive secret sharing and Boolean secret sharing, together with parallel computing techniques, effectively reduces the number of communication rounds of these nonlinear-layer protocols. The communication complexity of the protocols is analyzed, and the scheme is experimentally validated on public datasets. Theoretical and experimental analyses show that the proposed scheme achieves secure and accurate outsourced computation of deep neural networks while reducing communication overhead by 3×-6×.
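As a plaintext illustration (no cryptography) of the reduction behind the secure MSB extraction and ReLU outsourcing protocols, the sketch below encodes values as l-bit two's-complement fixed-point ring elements and computes ReLU(x) = (1 - MSB(x)) * x; in the secure protocol the sign bit is extracted over Boolean shares with a parallel prefix adder, but the arithmetic reduction is the same. The bit width and scaling factor are illustrative choices.

L_BITS = 64                      # ring bit width (illustrative)
MOD = 1 << L_BITS
FRAC = 16                        # fixed-point fractional bits (illustrative)

def to_ring(v):
    """Encode a real number as an L_BITS-bit two's-complement fixed-point value."""
    return int(round(v * (1 << FRAC))) % MOD

def from_ring(x):
    if x >= MOD // 2:            # top half of the ring represents negatives
        x -= MOD
    return x / (1 << FRAC)

def msb(x):
    """Sign bit of the encoding; the secure protocol extracts it over Boolean
    shares using a parallel prefix adder."""
    return (x >> (L_BITS - 1)) & 1

def relu_ring(x):
    return ((1 - msb(x)) * x) % MOD   # ReLU(x) = x if the sign bit is 0, else 0

for v in (-2.5, -0.01, 0.0, 3.75):
    assert abs(from_ring(relu_ring(to_ring(v))) - max(v, 0.0)) < 1e-4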
(3) Secure outsourced computation for deep neural networks based on approximate computation. To address the high cost of exactly computing nonlinear layers on ciphertext in secure outsourced computation of deep neural networks, the time-consuming nonlinear operations are converted, via approximate computation techniques, into operations well suited to ciphertext computation, thereby improving the efficiency of secure outsourced inference. For the Sigmoid and Tanh activation layers commonly used in deep neural networks, Fourier series expansion is used to fit them with trigonometric functions; combining additive secret sharing and multiplicative secret sharing, a secure and efficient trigonometric function outsourcing protocol is designed, on which the secure outsourced computation of nonlinear activation layers such as Sigmoid and Tanh is realized (a minimal numeric sketch of this Fourier fitting idea is given at the end of this section). Combining the interconversion of additive and multiplicative secret sharing, a secure and efficient comparison protocol is designed, on which the secure outsourced computation of the ReLU activation layer is implemented. Based on these protocols, a secure outsourced computation method for deep neural networks based on approximate computation is proposed, which protects model security and user input privacy while efficiently delivering outsourced inference services. The security of the outsourcing protocols is proved under the standard cryptographic security model, and experiments on public datasets verify that the approximate-computation-based scheme achieves low latency and high accuracy.

This dissertation studies the secure computation of deep neural networks under different computation modes and advances key theories and techniques, including lightweight neural network model generation for ciphertext computation, communication-efficient secure computation protocol design for nonlinear layers, and approximate-computation-based secure outsourcing protocol design for neural networks. Together they form a lightweight secure computation methodology for deep neural networks that provides data security guarantees for "cloud-model-empowered end applications".
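To illustrate the approximate-computation idea in contribution (3), the sketch below fits the Sigmoid function on a bounded interval with a truncated Fourier series using least squares; the resulting approximation involves only sines and cosines, which is the form the trigonometric outsourcing protocol evaluates on secret-shared inputs. The interval, number of harmonics, and fitting procedure are illustrative assumptions, not the dissertation's exact parameters.

import numpy as np

B, K = 8.0, 8                    # half-interval [-B, B] and number of harmonics
xs = np.linspace(-B, B, 2001)
sigmoid = 1.0 / (1.0 + np.exp(-xs))

# Design matrix [1, cos(k*pi*x/B), sin(k*pi*x/B)] for k = 1..K, then least-squares fit.
cols = [np.ones_like(xs)]
for k in range(1, K + 1):
    cols += [np.cos(k * np.pi * xs / B), np.sin(k * np.pi * xs / B)]
A = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(A, sigmoid, rcond=None)

approx = A @ coef
print("max |Sigmoid - Fourier fit| on [-B, B]:", np.max(np.abs(approx - sigmoid)))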