As a way of processing big data, edge computing has drawn increasing attention to the security and efficiency of its model parameters during transmission. Against this background, users' awareness of privacy protection is gradually rising, and they are unwilling to obtain services at the expense of their privacy. At the same time, as large numbers of intelligent terminal devices connect to the network, data volumes have exploded, yet much of this data has no practical value; this reduces the usability of the trained model and easily places pressure on network transmission. Therefore, not all model parameters trained on the edge side need to be uploaded. In the edge computing scenario, this paper accordingly studies the privacy protection and compression of model parameters, so as to safeguard the privacy of edge users and relieve network transmission pressure. The details are as follows:

(1) This paper proposes a federated learning privacy-preserving algorithm based on local differential privacy. The algorithm adopts federated learning under a cloud-edge-device three-layer architecture: the training data never leave the local device, and data information is shared between edge users and edge devices only through the exchange of model parameters. Each edge device obtains local model parameters by training on its local data. To further protect the privacy of user data, local differential privacy is used to perturb the local model parameters before they are uploaded, and the perturbed parameters are then aggregated, ensuring the security of the local model parameters during transmission and aggregation. Experimental results show that the proposed method protects user privacy in edge computing.

(2) This paper proposes a Top-K-based Adam gradient optimization and compression algorithm. The algorithm is implemented on the edge federated learning framework. First, the gradients of the model trained on the edge device are sparsified to obtain compressed gradients. The gradients are then selected according to Top-K: a threshold K is set, gradients that meet the threshold are uploaded, and gradients that do not are retained locally and participate in the next round of training until the accumulated gradient meets the upload condition. For the gradients to be uploaded, the Adam adaptive algorithm uses the history matrix as a regularization term on the sparse matrix and dynamically adjusts the learning rate of the model parameters, accelerating convergence. For the locally accumulated gradients, momentum SGD is applied to the sparse gradients as a correction, so that excessive sparsity of the locally accumulated gradients does not harm model convergence. Experimental results show that the proposed method compresses the model parameters while preserving model convergence.
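To make the first contribution concrete, the following is a minimal sketch of perturbing local model parameters before upload and averaging them on the aggregator. The abstract does not state which local differential privacy mechanism is used, so a per-coordinate Laplace mechanism over clipped parameters is assumed here; the function names (ldp_perturb, aggregate), the clipping bound, and the budget value are illustrative, not the thesis implementation.

```python
import numpy as np

def ldp_perturb(params, epsilon, clip=1.0, rng=None):
    """Perturb a flat parameter vector with a Laplace mechanism before upload.

    Parameters are clipped to [-clip, clip] so each coordinate's sensitivity
    is bounded by 2*clip, then independent Laplace noise with scale
    2*clip/epsilon is added per coordinate.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(params, -clip, clip)
    noise = rng.laplace(loc=0.0, scale=2.0 * clip / epsilon, size=clipped.shape)
    return clipped + noise

def aggregate(perturbed_updates):
    """Aggregator side: average the perturbed parameter vectors from devices."""
    return np.mean(np.stack(perturbed_updates, axis=0), axis=0)

# Example: three edge devices upload perturbed copies of their local parameters.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local_params = [rng.normal(size=10) for _ in range(3)]
    uploads = [ldp_perturb(p, epsilon=1.0, rng=rng) for p in local_params]
    global_params = aggregate(uploads)
    print(global_params)
```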
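For the second contribution, the sketch below illustrates Top-K gradient selection with local residual accumulation: only the largest-magnitude entries of the accumulated gradient are uploaded, and the remainder is kept locally for the next round. The selection rule (k largest magnitudes) and the helper name topk_sparsify are assumptions for illustration; the thesis's exact threshold scheme may differ.

```python
import numpy as np

def topk_sparsify(grad, residual, k):
    """Select the k largest-magnitude entries of the accumulated gradient.

    Entries below the Top-K cutoff stay in the residual and are carried into
    the next round; the selected entries form the sparse gradient to upload.
    """
    accumulated = grad + residual                        # add locally accumulated gradient
    idx = np.argpartition(np.abs(accumulated), -k)[-k:]  # indices of the k largest magnitudes
    upload = np.zeros_like(accumulated)
    upload[idx] = accumulated[idx]                       # sparse gradient to be uploaded
    new_residual = accumulated - upload                  # the rest stays local for next round
    return upload, new_residual

# Example: one training round on a single edge device.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.normal(size=20)
    residual = np.zeros(20)
    upload, residual = topk_sparsify(grad, residual, k=5)
    print("non-zeros uploaded:", np.count_nonzero(upload))
```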
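The abstract also states that the uploaded sparse gradients are applied with an Adam-style adaptive step whose history matrices scale the per-parameter learning rate. The sketch below is a standard Adam update used only to illustrate that role of the moment estimates; the thesis's specific regularization of the sparse matrix and its momentum SGD correction of locally accumulated gradients are not reproduced here.

```python
import numpy as np

def adam_step(theta, sparse_grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style update using an uploaded sparse gradient.

    m and v play the role of the 'history matrices': exponential averages of
    past (sparse) gradients and squared gradients that adapt the effective
    learning rate of each model parameter.
    """
    m = b1 * m + (1 - b1) * sparse_grad
    v = b2 * v + (1 - b2) * sparse_grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Example: apply one update with a sparse gradient on the aggregated model.
if __name__ == "__main__":
    theta = np.zeros(20)
    m = np.zeros(20)
    v = np.zeros(20)
    sparse_grad = np.zeros(20)
    sparse_grad[[1, 7, 13]] = [0.5, -0.3, 0.8]
    theta, m, v = adam_step(theta, sparse_grad, m, v, t=1)
    print(theta[[1, 7, 13]])
```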