Since the advent of commercial 5G, broadband wireless communication technology has undergone rapid development. However, spectrum resources have become increasingly constrained, and traditional communication frequency bands no longer suffice for future mobile communications. The application of millimeter-wave bands in B5G and 6G systems can therefore help achieve faster data rates, greater system capacity, higher reliability, and lower latency. Nonetheless, millimeter-wave technology faces obstacles such as high path loss, susceptibility to environmental interference, and limited coverage. Although massive multiple-input multiple-output (Massive MIMO) systems combined with beamforming (BF) can effectively address some of these deficiencies, significant technical challenges persist. Specifically, large-scale millimeter-wave antenna arrays generate higher-resolution beams, which leads to greater beam search complexity at both transmitters and receivers. In addition, ultra-dense network deployment causes more severe inter-cell interference; while predefined codebooks can reduce the search overhead, the accumulated and varying inter-cell interference arising from densely deployed base stations limits the achievable BF performance. To tackle these issues, this paper explores feature-based enhancement techniques for millimeter-wave beam search.

First, this paper proposes a beam feature selection and prediction algorithm based on communication scenarios, aimed at reducing the time and power overhead of millimeter-wave beam search. In the first step, the algorithm selects the optimal feature beams according to a power-loss-probability minimization criterion and generates a feature beam set (a subset of the beam indices) from the optimal-beam probabilities; in the second step, these optimal-beam probabilities of the communication scenario are obtained using a local-learning-based feature selection clustering algorithm (LLC-fs). Because the mapping between the scenario-based feature beam set and the optimal beam is implicit and nonlinear, a deep neural network (DNN) model is used to approximate it, and the offline-trained model then predicts the optimal beam from the feature beam set. Simulation results demonstrate that the scenario-based DNN model can predict the optimal millimeter-wave beam online using the offline-trained model, with prediction performance that closely approaches that of the exhaustive beam search while effectively reducing the beam search overhead.
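To make the two-step structure concrete, the sketch below (Python with NumPy and PyTorch) selects a feature-beam subset from empirical optimal-beam statistics and trains a small DNN to predict the optimal beam from the feature-beam powers. All dimensions, the synthetic data, the network shape, and the top-K selection rule are illustrative assumptions, not the thesis's exact algorithm; the LLC-fs clustering and the precise power-loss criterion are simplified away.

```python
# Illustrative sketch: feature-beam selection plus DNN-based optimal-beam prediction.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
N_BEAMS, N_FEATURE, N_SAMPLES = 64, 8, 2000    # full codebook size, feature-set size, users

# Synthetic stand-in for measured per-beam received powers of many users in one scenario.
powers = rng.rayleigh(scale=1.0, size=(N_SAMPLES, N_BEAMS)) ** 2
optimal = powers.argmax(axis=1)                 # exhaustive-search label per user

# Step 1: empirical optimal-beam probabilities for the scenario; keep the beams that are
# most often optimal as the feature-beam set (a simple proxy for the power-loss criterion).
beam_prob = np.bincount(optimal, minlength=N_BEAMS) / N_SAMPLES
feature_set = np.argsort(beam_prob)[-N_FEATURE:]

# Power loss if the search were restricted to the feature set alone (sanity check).
loss_db = 10 * np.log10(powers.max(axis=1) / powers[:, feature_set].max(axis=1))
print(f"mean power loss of feature set only: {loss_db.mean():.2f} dB")

# Step 2: small DNN that predicts the optimal beam from the feature-beam powers.
x = torch.tensor(powers[:, feature_set], dtype=torch.float32)
y = torch.tensor(optimal, dtype=torch.long)
model = nn.Sequential(nn.Linear(N_FEATURE, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, N_BEAMS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                            # offline training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

pred = model(x).argmax(dim=1)
print(f"top-1 agreement with exhaustive search: {(pred == y).float().mean().item():.2%}")
```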
Second, this paper proposes a millimeter-wave codebook training algorithm based on deep reinforcement learning (DRL), scenario features, and interference suppression, aimed at suppressing the accumulated and varying inter-cell interference, improving BF gain, and reducing beam search complexity. In the first step, a DRL-based anti-interference beam pattern training algorithm is introduced that maximizes the signal-to-interference-plus-noise ratio (SINR) of target users while relying only on measured received powers, and that trains an anti-interference beam pattern for a single target user or for a set of target users sharing a similar channel. In the second step, a scenario-based anti-interference beam codebook training algorithm is proposed, which uses an SINR feature matrix to cluster all dynamic users within the scenario. Based on the clustering results, parallel anti-interference beam pattern training networks are assigned to the user clusters. These parallel networks adaptively optimize the beam patterns of their target user clusters by sensing the environment, the user distribution, the interference sources, and array impairments, and finally the anti-interference codebooks are generated. Simulation results demonstrate that, under various scenarios, the codebooks generated by the proposed algorithm can effectively suppress interference, sense array impairments, improve BF gain, and reduce beam search complexity (fewer beams are required).
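As a rough illustration of the anti-interference beam pattern idea, the sketch below (Python with NumPy) computes the SINR of a phase-shifter-only beam for a target user under two assumed interfering directions and refines the phases with a simple random-perturbation search driven only by the scalar SINR value. The channel model, array size, and the perturbation search are assumptions for illustration; the thesis instead trains the beam pattern with a DRL agent and does not reduce to this procedure, and the SINR feature matrix clustering and parallel per-cluster networks are not reproduced here.

```python
# Illustrative sketch: phase-only beam refinement toward higher SINR, treating the SINR
# as a black-box measurement (here computed from an assumed line-of-sight channel model).
import numpy as np

rng = np.random.default_rng(1)
N_ANT = 32                                       # antennas at the base station

def steering(theta, n=N_ANT):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

h_target = steering(np.deg2rad(20))              # target user's channel (LoS assumption)
h_interf = [steering(np.deg2rad(a)) for a in (25, -40)]   # interfering directions
noise = 1e-2

def sinr(phases):
    w = np.exp(1j * phases) / np.sqrt(N_ANT)     # phase-shifter-only beam pattern
    sig = abs(w.conj() @ h_target) ** 2
    interf = sum(abs(w.conj() @ h) ** 2 for h in h_interf)
    return sig / (interf + noise)

phases = -np.angle(h_target)                     # start from the matched (MRT-like) beam
best = sinr(phases)
for _ in range(5000):                            # random-perturbation hill climbing
    cand = phases + rng.normal(scale=0.05, size=N_ANT)
    s = sinr(cand)
    if s > best:
        phases, best = cand, s

print(f"SINR of matched beam : {10 * np.log10(sinr(-np.angle(h_target))):.1f} dB")
print(f"SINR after refinement: {10 * np.log10(best):.1f} dB")
```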