
Wireless Powered Communication Network Based On Autonomous Navigation Of UAV

Posted on: 2022-12-11
Degree: Master
Type: Thesis
Country: China
Candidate: Y T Li
Full Text: PDF
GTID: 2492306764462194
Subject: Automation Technology
Abstract/Summary:
In wireless powered communication networks (WPCNs), unmanned aerial vehicles (UAVs) are often deployed as aerial base stations to charge sensors and collect data, since their high mobility and low cost can effectively prolong network lifetime. Using UAVs as aerial base stations overcomes the user unfairness caused by the "doubly near-far" problem in traditional fixed-base-station WPCNs, and flexibly shortening the signal propagation distance between the UAV and ground devices increases the achievable data rate. UAVs also support better communication links between air and ground terminals because of reduced signal blockage and shadowing. In most scenarios, however, the UAV lacks sufficient prior knowledge of sensor locations to plan an optimal trajectory in advance, so autonomous navigation and real-time decision-making are required. This thesis therefore investigates UAV-based autonomous navigation for data collection from batteryless sensors without complete knowledge of sensor locations.

In this thesis, the optimization problem is modeled as a Markov decision process (MDP) and solved with a deep reinforcement learning (DRL) algorithm, based on the battery states, channel conditions, and current data collection of the sensors within the UAV's coverage area, together with the UAV's position. By jointly optimizing the UAV's steering angle, speed, and operating mode, the goal is to maximize the average data collected over all sensors while meeting the energy requirements of the batteryless sensors and guaranteeing fairness of data collection among them.

The thesis proceeds along two lines. On the one hand, in the single-UAV scenario, the classical deep Q-network (DQN) algorithm is used to complete the data collection and path planning tasks while satisfying the sensors' energy requirements;
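The single-UAV formulation described above can be sketched in code. The discretizations of the steering angle, speed, and operating mode, the state features, and the linear Q-function approximator (standing in for the thesis's deep Q-network) are all illustrative assumptions, not the thesis's actual implementation:

```python
import random
import numpy as np

# Assumed discretization of the joint action from the abstract:
# (steering angle, speed, operating mode). Mode 0 = wireless power
# transfer (charging sensors), mode 1 = data collection.
ANGLES = [-30.0, 0.0, 30.0]   # degrees (assumption)
SPEEDS = [5.0, 10.0]          # m/s (assumption)
MODES = [0, 1]
ACTIONS = [(a, s, m) for a in ANGLES for s in SPEEDS for m in MODES]

STATE_DIM = 6  # e.g. UAV (x, y), mean sensor battery, mean channel gain,
               # mean data collected, time remaining -- all assumptions

class LinearDQNAgent:
    """Epsilon-greedy agent with a linear Q(s, a) = w_a . s approximator,
    a lightweight stand-in for the deep Q-network used in the thesis."""

    def __init__(self, state_dim=STATE_DIM, n_actions=len(ACTIONS),
                 lr=0.01, gamma=0.95, epsilon=0.1):
        self.w = np.zeros((n_actions, state_dim))
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def q_values(self, state):
        # One Q-value per discrete (angle, speed, mode) action.
        return self.w @ state

    def act(self, state):
        # Epsilon-greedy exploration over the discrete action set.
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        return int(np.argmax(self.q_values(state)))

    def update(self, state, action, reward, next_state, done):
        # One-step temporal-difference update toward the bootstrapped target.
        target = reward if done else (
            reward + self.gamma * np.max(self.q_values(next_state)))
        td_error = target - self.q_values(state)[action]
        self.w[action] += self.lr * td_error * np.asarray(state)
        return td_error
```

The reward in the thesis's setting would encode collected data and fairness while penalizing violations of the sensors' energy requirements; that reward shaping is left abstract here.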
on the other hand, in the multi-UAV scenario, three DRL-based multi-agent algorithms are proposed to solve the autonomous navigation problem: deep Q-network with shared state (DQN-SS), deep Q-network with shared actions (DQN-SA), and asynchronous multi-agent deep Q-network (MADQN). These algorithms maximize average data collection while keeping the UAV swarm collision-free, making them suitable for practical scenarios with different business needs. The effectiveness of the autonomous navigation algorithms is verified by simulation. The results show that, in terms of maximizing average data collection under autonomous navigation, the DQN algorithm significantly improves network performance in the single-UAV scene; in the multi-UAV scene, the MADQN algorithm adopts a distributed-sampling, centralized-learning approach and achieves wider coverage and higher average data collection than the other two proposed multi-agent algorithms.
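The distributed-sampling, centralized-learning structure attributed to MADQN can be outlined roughly as follows. The shared replay buffer and the training-step shape are assumptions for illustration, not the thesis's actual architecture:

```python
import random
from collections import deque

class SharedReplayBuffer:
    """Centralized experience store: each UAV agent pushes its own
    transitions (distributed sampling), while a single learner draws
    mixed minibatches from all agents (centralized learning)."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, agent_id, state, action, reward, next_state, done):
        self.buffer.append((agent_id, state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform minibatch over transitions from all UAVs.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def training_step(buffer, learner_update, batch_size=32):
    """One centralized learning step: apply the learner's update rule
    (e.g. a DQN TD update) to each sampled transition."""
    batch = buffer.sample(batch_size)
    for (_agent_id, s, a, r, s2, d) in batch:
        learner_update(s, a, r, s2, d)
    return len(batch)
```

In an asynchronous setup, each UAV would act on a periodically refreshed snapshot of the shared network while only the centralized learner writes to its weights; collision avoidance would be enforced through the reward or the action mask, which is omitted here.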
Keywords/Search Tags: Unmanned aerial vehicle (UAV), Wireless power transfer (WPT), Data collection, Autonomous navigation, Deep reinforcement learning (DRL)