
Theoretical And Practical Research On Privacy-preserving Machine Learning

Posted on: 2024-07-16
Degree: Master
Type: Thesis
Country: China
Candidate: M Q Wei
Full Text: PDF
GTID: 2568307067493324
Subject: Software engineering
Abstract/Summary:
In recent years, improvements in hardware and the massive amount of data available on the network have greatly promoted the development of machine learning, which has attracted increasing attention and given rise to many related studies and applications. As the technology is used more widely, deployed in practice, and developed as an industry, more and more security issues have drawn the attention of both academia and industry. In fields involving sensitive data in particular, how to run machine learning algorithms while preserving data privacy has become an urgent problem. Researchers have proposed designing privacy-preserving machine learning frameworks with cryptographic techniques such as Secure Multi-party Computation, Homomorphic Encryption, Federated Learning, and Differential Privacy. Because of the complexity of these cryptographic techniques, such frameworks are usually inefficient and remain far from practical deployment. Scholars in both the security and artificial intelligence communities have worked continuously to improve the efficiency of secure frameworks, and there have been major breakthroughs and improvements in recent years. However, this thesis finds that some security issues have been overlooked during the development of privacy-preserving machine learning frameworks, and that some secure frameworks contain security flaws. This thesis provides detailed security analysis and data analysis to demonstrate the existence of these flaws and shows experimentally how they lead to data leakage. The thesis is also concerned with the low efficiency of privacy-preserving graph parallel computation frameworks, and therefore designs a new, efficient graph parallel computation framework that also provides robustness, demonstrating its efficiency experimentally. The main contributions of this thesis are as follows:
1. Analysis and validation of a data leakage problem in a privacy-preserving machine learning framework. SCSDF is an efficient, semi-honest secure, three-party framework for privacy-preserving neural network inference, which improves the computational efficiency of the nonlinear layers by designing an efficient DReLU protocol. However, we find a serious data leakage problem during SCSDF execution that undermines the security of the framework. This thesis first provides a detailed security analysis of the DReLU protocol in SCSDF from the perspective of the real-ideal paradigm. It then examines specific steps of SCSDF to demonstrate that, during protocol execution, the signs of the input data are leaked to the third party responsible for assisting the protocol. Experiments further show that SCSDF can leak users' private data during inference and therefore cannot truly guarantee user privacy. In addition, this thesis proposes some potential solutions for this security flaw.

2. A privacy-preserving graph parallel computation framework. Several researchers have worked on adding privacy preservation to graph parallel computation frameworks; however, the large amount of data keeps the efficiency of such frameworks low. This thesis proposes CoLoRGraph, a new, secure, and efficient graph parallel computation framework against malicious adversaries, which mainly uses group computation techniques to reduce the time complexity of the Shuffle operation in secure graph parallel computation from linear to constant. At the same time, CoLoRGraph uses replicated secret sharing to resist malicious adversaries, reducing communication overhead relative to the MAC-based authentication techniques currently in use. In addition, CoLoRGraph provides robustness to users. The thesis concludes with experimental evidence that CoLoRGraph is significantly more efficient than current frameworks.
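The sign-leakage flaw described in contribution 1 can be illustrated with a toy sketch. This is not the actual SCSDF protocol, and the function name is hypothetical; it only shows the general pitfall that motivates the analysis: if an intermediate value sent to the assisting third party is produced by multiplying the secret input by a random *positive* factor, the magnitude is hidden but the sign is not.

```python
# Toy illustration (NOT the exact SCSDF/DReLU protocol) of sign leakage:
# masking x with a random positive factor r hides |x| but preserves sign(x),
# so a helper party that sees x * r learns the sign of the secret input.
import secrets

def mask_with_positive_factor(x: int) -> int:
    # Illustrative masking step: r is uniformly random but always > 0.
    r = secrets.randbelow(2**31 - 1) + 1
    return x * r  # value an assisting third party would observe

masked = mask_with_positive_factor(-7)
# The helper cannot recover -7 from `masked`, but its sign leaks:
assert (masked < 0) == (-7 < 0)
```

In a DReLU-style computation the sign bit is exactly the value being computed, which is why leaking it to a non-authorized party defeats the purpose of the secure protocol.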
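Contribution 2 relies on replicated secret sharing among three parties. A minimal sketch of the standard 3-party scheme follows; the function names are illustrative and not taken from the thesis. The key idea is that each additive piece is held by two parties, so the duplication itself lets honest parties cross-check values and detect tampering by a single malicious party, without attaching a MAC to every share.

```python
# Minimal sketch of 3-party replicated secret sharing over Z_{2^64}:
# x = (x0 + x1 + x2) mod 2^64, and party i holds the pair (x_i, x_{i+1 mod 3}).
import secrets

MOD = 2**64

def rss_share(x: int):
    """Split x into three additive pieces and distribute overlapping pairs."""
    x0 = secrets.randbelow(MOD)
    x1 = secrets.randbelow(MOD)
    x2 = (x - x0 - x1) % MOD
    pieces = [x0, x1, x2]
    # Party i receives (pieces[i], pieces[(i+1) % 3]).
    return [(pieces[i], pieces[(i + 1) % 3]) for i in range(3)]

def rss_open(shares) -> int:
    """Reconstruct x from the parties' first pieces."""
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

shares = rss_share(123)
assert rss_open(shares) == 123
# Redundancy enables cheap consistency checks: party 0's second piece
# must match party 1's first piece, and so on cyclically.
assert shares[0][1] == shares[1][0]
```

Because every piece is replicated at two parties, a single malicious party that alters its copy is caught by comparison with the honest duplicate, which is the communication saving over MAC-based authentication mentioned in the abstract.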
Keywords/Search Tags:Secure Multi-Party Computation, Privacy-Preserving Machine Learning, Data Leakage, Graph Parallel Computation