
Research On Privacy Protection Technology Based On Deep Learning In Cloud-Edge Collaboration Environment

Posted on: 2023-03-12
Degree: Master
Type: Thesis
Country: China
Candidate: J M Yang
Full Text: PDF
GTID: 2558307097494584
Subject: Computer technology
Abstract/Summary:
In recent years, deep learning has achieved remarkable success in many application fields such as computer vision, audio processing, and natural language processing. At the same time, demand for deep learning services keeps growing: people increasingly expect convenient services on their devices, such as object recognition, language translation, and health monitoring. Cloud-based deep learning services are a promising approach in which most of the deep neural network is offloaded to the cloud, and users only need to send their data to the cloud to obtain the service. However, while cloud-based deep learning services bring benefits, they also create privacy risks when sensitive data is transferred from mobile devices to untrusted cloud servers.

1. To protect sensitive data in cloud-based deep learning, this thesis explores the feasibility of using steganography to ensure inference privacy. Specifically, it first designs GHOST, an intrusive deep learning privacy protection scheme that applies two different steganography techniques to hide sensitive data, making it invisible during the inference stage. Borrowing the idea of a backdoor attack, GHOST retrains the cloud network on both the steganographic data and the original data, turning it into a poisoned network that learns the features of the sensitive images hidden inside the steganographic images. The cloud network can therefore produce the expected classifications for the steganographic data without reducing the inference accuracy on the original public data.

2. Since most cloud service providers do not allow changing the structure or weights of cloud networks, this thesis further proposes GHOST+, a non-intrusive deep learning privacy protection scheme. Because DNNs are inherently vulnerable to adversarial attacks, the main idea of GHOST+ is to turn this vulnerability into a weapon for protecting data privacy: the DNN is misled into classifying hidden images into the classes of the sensitive images hidden within them. GHOST+ not only hides sensitive images inside public images before uploading, to protect privacy, but also misleads the DNN into outputting the expected classification results, preserving high inference accuracy. The main difference between the two schemes is that GHOST retrains the DNN as a poisoned network to learn the hidden features of sensitive images, whereas GHOST+ uses a generative adversarial network to generate adversarial perturbations without changing the DNN.

For enhanced privacy and a better computation-communication trade-off, both schemes employ an edge-cloud collaboration framework. Compared with previous solutions, this is the first work that combines the characteristics of steganography and DNNs to achieve private inference while ensuring high accuracy. Finally, through theoretical privacy analysis and experiments on real datasets, the thesis demonstrates the security and effectiveness of both proposed schemes.
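The abstract does not specify which steganography techniques GHOST employs, but both schemes rest on the same primitive: embedding a sensitive image inside a public cover image so that the upload looks innocuous. As a minimal, hypothetical sketch of that primitive (not the thesis's actual method), the NumPy snippet below hides the high bits of a secret image in the low bits of a cover image (LSB substitution) and then recovers an approximation of the secret:

```python
import numpy as np

def lsb_hide(cover: np.ndarray, secret: np.ndarray, bits: int = 4) -> np.ndarray:
    """Embed the top `bits` bits of `secret` into the low bits of `cover`.

    Both arrays are uint8 images of the same shape. The stego image keeps
    the high bits of the cover, so it still looks like the public image.
    """
    keep_mask = 0xFF ^ ((1 << bits) - 1)   # e.g. 0xF0 for bits=4
    payload = secret >> (8 - bits)         # top bits of the secret image
    return (cover & keep_mask) | payload

def lsb_reveal(stego: np.ndarray, bits: int = 4) -> np.ndarray:
    """Recover a `bits`-bit approximation of the secret from the stego image."""
    return (stego & ((1 << bits) - 1)) << (8 - bits)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)    # stand-in public image
secret = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # stand-in sensitive image

stego = lsb_hide(cover, secret)
recovered = lsb_reveal(stego)

# The cover's high bits survive, and the secret is recovered to 4-bit precision.
assert np.array_equal(stego & 0xF0, cover & 0xF0)
assert int(np.max(np.abs(recovered.astype(int) - secret.astype(int)))) < 16
```

In GHOST the cloud DNN is retrained so that such stego inputs are classified as their hidden sensitive class; in GHOST+ the same effect is achieved by adding adversarial perturbations instead of retraining. Deep-learning-based steganography would replace the bitwise embedding above with learned encoder/decoder networks, but the hide-then-classify workflow is the same.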
Keywords/Search Tags: deep learning, cloud computing, edge computing, steganography, adversarial attacks, inference privacy