Recently, with the advent of the big data era and the rapid development of Internet technology, Machine Learning as a Service (MLaaS) has brought great convenience to our daily lives. To enhance service quality, large amounts of sensitive data are collected to update and improve the underlying machine learning models. However, with growing privacy awareness and the tightening of information security laws and regulations, the traditional MLaaS model faces serious security challenges. An urgent problem is therefore how to update and improve machine learning models while fully protecting data privacy and avoiding legal risks. Research on privacy-preserving machine learning and privacy computing technology is thus of significant theoretical and practical value.

Secret-sharing-based secure multiparty computation is one of the most practical primitives applied in privacy-preserving machine learning. A popular research topic is reducing the communication cost and minimizing the number of interaction rounds among the parties. Accordingly, this thesis aims to optimize the communication efficiency of secret-sharing-based secure multiparty computation protocols. Specifically, we focus on two outsourced privacy computing scenarios: K-means clustering and quantized neural network inference. We analyze the problems in previous works and propose new solutions. Our main contributions are as follows:

(1) Previous privacy-preserving K-means clustering schemes suffer from three drawbacks: low computational efficiency, low communication efficiency, and an inability to provide full data privacy. In this work, we therefore propose an efficient privacy-preserving K-means clustering scheme based on replicated secret sharing. First, we construct four efficient secure computation sub-protocols in a modular fashion. Then, we build a K-means clustering scheme by composing the four sub-protocols into a hybrid model, and we provide a security analysis for each sub-protocol. Experimental results show that our solution achieves the same accuracy as the plaintext algorithm. Compared with the previous state-of-the-art scheme, our construction improves computational efficiency by 94.0%–96.1% and reduces communication cost by 98.4%–98.6%, making our solution more practical.

(2) In general, the number of interaction rounds in existing secret-sharing-based privacy-preserving schemes is proportional to the depth of the arithmetic circuit, so previous schemes must assume a fast communication network. This assumption is unrealistic in wide-area network environments. To solve this problem, we focus on the scenario of quantized neural network inference and propose an efficient privacy-preserving scheme based on masked secret sharing. Specifically, we design six secure computation sub-protocols with constant round complexity and provide a detailed security analysis for each. Based on these sub-protocols, we construct an efficient secure inference scheme for quantized neural networks. Experimental results show that, in wide-area network environments, our scheme improves online efficiency by 33.3% compared with the best existing work.
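To make the replicated-secret-sharing primitive behind contribution (1) concrete, the following is a minimal sketch of 2-out-of-3 replicated secret sharing over the ring Z_{2^32}. The modulus, function names, and the restriction to sharing, reconstruction, and local addition are illustrative assumptions, not the thesis's actual sub-protocols:

```python
import secrets

MOD = 2 ** 32  # illustrative ring Z_{2^32}

def share(x):
    """Split x into three additive shares; party i holds the pair (x_i, x_{i+1})."""
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    s = [x1, x2, x3]
    # replicated form: any two parties together hold all three shares
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(parties):
    """Any two parties suffice: party 0 holds (x1, x2), party 1 holds (x2, x3)."""
    (a1, a2), (_, b2) = parties[0], parties[1]
    return (a1 + a2 + b2) % MOD

def add(p, q):
    """Secure addition is purely local: each party adds its pairs component-wise."""
    return [((p[i][0] + q[i][0]) % MOD, (p[i][1] + q[i][1]) % MOD)
            for i in range(3)]
```

Because addition (and, more generally, any linear operation) requires no interaction, the communication cost of such schemes is dominated by multiplications and comparisons, which is why the abstract emphasizes efficient sub-protocols for those steps.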
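The constant-round idea behind masked secret sharing in contribution (2) can likewise be illustrated. In the simplified two-party sketch below (an assumption-laden toy, not the thesis's six sub-protocols), each value x is represented by a public masked value Dx = x + dx together with an additively shared random mask dx; given offline-precomputed shares of the cross term dx*dy, each party computes its share of x*y entirely locally, so the online phase costs only a single round to open the next mask:

```python
import secrets

MOD = 2 ** 32  # illustrative ring Z_{2^32}

def additive_share(v):
    """Two-party additive sharing of v modulo 2^32."""
    r = secrets.randbelow(MOD)
    return [r, (v - r) % MOD]

def mask(x):
    """Masked representation: public Dx = x + dx, secret-shared mask dx."""
    dx = secrets.randbelow(MOD)
    return (x + dx) % MOD, additive_share(dx)

def masked_mul(Dx, dx_sh, Dy, dy_sh, dxdy_sh):
    """Local computation of additive shares of z = x*y, using
    x*y = Dx*Dy - Dx*dy - Dy*dx + dx*dy  (Dx, Dy public; dx, dy, dx*dy shared)."""
    z_sh = []
    for i in range(2):
        t = (-Dx * dy_sh[i] - Dy * dx_sh[i] + dxdy_sh[i]) % MOD
        if i == 0:
            t = (t + Dx * Dy) % MOD  # the public term enters one share only
        z_sh.append(t)
    return z_sh
```

Since every multiplication consumes only local work plus one opening, the round count of a circuit evaluated this way no longer grows with its multiplicative depth, which is the property the abstract exploits for wide-area network settings.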