Nowadays, internet technology is developing rapidly, and data acquisition and collection technologies are becoming increasingly efficient. In the military, aerospace, medical, e-commerce and other fields, there are a large number of data classification problems to be solved. As a typical method for processing massive data, machine learning learns models from sample data and uses these models to predict and make decisions on new data. It is therefore very meaningful to study machine learning classification algorithms and to mine useful information and knowledge from data. Typical machine learning algorithms for classification tasks include the artificial neural network (ANN), the decision tree (DT) and ensemble learning. With the continuous deepening of quantum theory research, combining the classical ANN with quantum computing to construct efficient quantum neural networks (QNN) has become a research hotspot. The differentially private decision tree (DiffP-DT) can protect private data by introducing noise, but this weakens its classification ability. The performance of ensemble learning is closely related to the accuracy and diversity of its base learners, so how to balance these two factors is also a research hotspot. In this paper, ANN, DT and ensemble learning among machine learning classification algorithms are thoroughly investigated, improvement strategies are designed for several problems in these algorithms, and the following results are achieved:

1. Based on the classical feedforward artificial neural network (Feed-ANN), a quantum neural network based on a quantized Sigmoid activation function (Sig-QNN) is proposed. In the Sig-QNN model, a quantized version that simulates the Sigmoid activation function is designed and used to process the signals in the hidden layer and the output layer (a minimal sketch of such an activation is given after this list). In addition, we connect the input and output layers to strengthen the signal transmission capability of Sig-QNN. For training the related parameters of the Sig-QNN model, we give update rules based on the conjugate gradient (CG) algorithm. Finally, the effectiveness of Sig-QNN is tested on a digit recognition task. The experimental results also show that the Sig-QNN model needs only a relatively small number of hidden-layer neurons to obtain a stronger classification ability than Feed-ANN.

2. Based on the multi-level activated quantum neural network (MLQNN), a multi-level activated quantum weighted neural network (W-MLQNN) is proposed. The MLQNN model only imitates the concept of quantum superposition, and its information transmission process is not based on quantum computation theory. We introduce quantum weighted neurons in the output layer of W-MLQNN, so that quantum computing theory is actually used in the signal transmission process. For training parameters such as the weights of the W-MLQNN model, we give update rules based on the Levenberg-Marquardt (LM) algorithm. Finally, the feasibility of the W-MLQNN model is verified through lie detection experiments. The W-MLQNN model needs only a relatively small number of hidden-layer neurons to obtain a stronger classification ability than MLQNN.

3. Because the random subspace technique easily loses important features and generates redundant subspaces, an ensemble learning algorithm based on local AdaBoost (ELLA) is proposed. The ELLA algorithm uses a fixed feature subset strategy to avoid losing important features when generating random subspaces. In addition, based on AdaBoost, a local AdaBoost that uses sample neighborhood information is
proposed. The ELLA algorithm uses this local AdaBoost to generate a set of candidate base classifiers in each iteration, and then selects the best base classifier according to a correlation loss function and adds it to the new ensemble. Finally, the ELLA algorithm uses the quantum genetic algorithm (QGA) to search for a set of voting weights suited to the new ensemble of classifiers (a minimal sketch of such a weighted vote and its fitness function is given after this list). The effectiveness of ELLA is tested through several experiments, and it is verified that ELLA obtains stronger classification ability than the relevant comparison algorithms.

4. Based on the differentially private decision tree using the maximal class label (MaxDiffP-DT), an adaptive differentially private decision tree (AdaDiffP-DT) and an ensemble model using AdaDiffP-DT as base classifiers (EnAdaDiffP-DT) are proposed. AdaDiffP-DT uses the exponential mechanism to add noise in the feature splitting process of the inner nodes, thereby avoiding a secondary division of the privacy budget, and designs a strategy on the leaf nodes that adaptively adjusts the privacy budget according to the distribution of sample classes (a generic sketch of the exponential mechanism is given after this list). EnAdaDiffP-DT saves privacy budget by generating sample subspaces through random sampling without replacement, and it designs a multi-population quantum genetic algorithm (Multi P-QGA) to search for an appropriate voting weight for each base classifier AdaDiffP-DT. The effectiveness of AdaDiffP-DT and EnAdaDiffP-DT is verified through several experiments. Compared with related algorithms, the experimental results show that AdaDiffP-DT obtains stronger classification ability than MaxDiffP-DT, and EnAdaDiffP-DT obtains stronger classification ability than the relevant comparison algorithms.
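As an illustration of the kind of activation referred to in results 1 and 2, the following Python sketch shows a multi-level (quantized) sigmoid built as an average of shifted sigmoids, a common construction in the quantum neural network literature. The number of levels, the shift grid and the slope parameter are assumptions for illustration only; the exact quantized Sigmoid of Sig-QNN and the multi-level activation of MLQNN/W-MLQNN are given in the thesis itself.

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, theta=None, beta=1.0):
    """Multi-level (quantized) sigmoid: an average of `levels` shifted
    sigmoids. This is a generic quantum-inspired activation sketch; the
    thesis's exact Sig-QNN / MLQNN activation may differ.

    x      : pre-activation values (array-like)
    levels : number of quantum levels (shifted sigmoids to superpose)
    theta  : shifts of the individual sigmoids (assumed evenly spaced grid)
    beta   : slope (steepness) parameter
    """
    x = np.asarray(x, dtype=float)
    if theta is None:
        theta = np.linspace(-1.0, 1.0, levels)  # assumed default shifts
    # Superpose the shifted sigmoids and normalize by the number of levels.
    out = np.zeros_like(x)
    for t in theta:
        out += 1.0 / (1.0 + np.exp(-beta * (x - t)))
    return out / len(theta)

# Example: the activation rises in `levels` soft steps instead of one.
print(multilevel_sigmoid([-2.0, 0.0, 2.0], levels=4, beta=5.0))
```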
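For result 3, the quantum genetic algorithm itself is not reproduced here; the sketch below only shows the weighted majority vote that combines the base classifiers and the validation-accuracy fitness that a weight search such as QGA would maximize. The scikit-learn-style predict interface, the toy ConstantClassifier and the particular fitness definition are assumptions for illustration.

```python
import numpy as np

def weighted_vote(classifiers, weights, X):
    """Combine base classifiers by weighted majority voting.

    classifiers : fitted objects with a predict(X) method (assumed interface)
    weights     : one non-negative voting weight per classifier, e.g. a
                  candidate produced by a (quantum) genetic search
    X           : samples to classify, shape (n_samples, n_features)
    """
    preds = np.array([clf.predict(X) for clf in classifiers])  # (n_clf, n_samples)
    classes = np.unique(preds)
    # Accumulate the weight each class receives for every sample.
    scores = np.zeros((len(classes), preds.shape[1]))
    for k, c in enumerate(classes):
        scores[k] = ((preds == c) * np.asarray(weights)[:, None]).sum(axis=0)
    return classes[np.argmax(scores, axis=0)]

def vote_fitness(weights, classifiers, X_val, y_val):
    """Validation accuracy of a candidate weight vector; a genetic-style
    search (such as the QGA mentioned above) would maximize this value."""
    return np.mean(weighted_vote(classifiers, weights, X_val) == y_val)

class ConstantClassifier:
    """Toy stand-in for a fitted base classifier (illustration only)."""
    def __init__(self, label):
        self.label = label
    def predict(self, X):
        return np.full(len(X), self.label)

clfs = [ConstantClassifier(0), ConstantClassifier(1), ConstantClassifier(1)]
X_demo = np.zeros((4, 3))
print(weighted_vote(clfs, weights=[0.5, 0.3, 0.3], X=X_demo))  # class 1 wins (0.6 > 0.5)
```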
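For result 4, the following is a generic sketch of the exponential mechanism used to select a splitting feature under differential privacy: a candidate is drawn with probability proportional to exp(epsilon * utility / (2 * sensitivity)). The utility scores, the per-node budget and the sensitivity value below are placeholders; AdaDiffP-DT's exact scoring function and adaptive budget allocation are described in the thesis itself.

```python
import numpy as np

def exponential_mechanism_split(utilities, epsilon, sensitivity):
    """Pick a splitting candidate via the exponential mechanism.

    utilities   : utility score per candidate split (e.g. information gain
                  or maximal class count)
    epsilon     : privacy budget assigned to this inner node
    sensitivity : global sensitivity of the utility function

    Returns the index of the selected candidate. This is a generic sketch
    of the mechanism, not the thesis's exact procedure.
    """
    u = np.asarray(utilities, dtype=float)
    logits = epsilon * u / (2.0 * sensitivity)
    logits -= logits.max()          # subtract the max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(np.random.choice(len(u), p=probs))

# Example with hypothetical information-gain scores for three features.
print(exponential_mechanism_split([0.12, 0.30, 0.05], epsilon=0.5, sensitivity=1.0))
```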