
Deep-Learning Based Signal Detection Techniques For OTFS Systems

Posted on: 2023-11-04    Degree: Doctor    Type: Dissertation
Institution: University    Candidate: Enku Yosef Kefyalew    Full Text: PDF
GTID: 1528306905996969    Subject: Communication and Information System
Abstract/Summary:
Next-generation wireless systems are expected to support a variety of use cases with a wide range of performance requirements. Interest in high-mobility use cases involving high-speed trains, unmanned vehicles, drones, airplanes, etc., is on the rise. Communication in such high-mobility and/or high-carrier-frequency scenarios has to deal with the large Doppler shifts that are common in these environments. Existing multicarrier modulations, such as the orthogonal frequency division multiplexing (OFDM) waveform used in today's wireless systems, can achieve high bandwidth efficiency over a linear time-invariant channel, but not over a channel with both frequency and time dispersion, also known as a doubly-dispersive channel. Despite its popularity and adoption in current standards, OFDM suffers from severe performance degradation in high-Doppler scenarios because of the loss of orthogonality among subcarriers and the resulting inter-carrier interference (ICI). In this regard, a new air interface called orthogonal time frequency space (OTFS) modulation has recently been proposed to address channels characterized by extreme Doppler effects. OTFS is a two-dimensional (2D) modulation that transforms a time-varying fading channel into a 2D non-fading, time-independent channel in the delay-Doppler (DD) domain, allowing all modulation symbols to spread across time and frequency and thus experience the same channel gain. OTFS provides a more robust and practical approach to exploiting diversity because the DD channel response is sparse in nature: most of the energy from channel reflectors and their associated Dopplers is concentrated in a few bins, owing to the lower channel variability in this domain. In OTFS systems, the number of equivalent DD channel dimensions is greater than in OFDM systems because each modulated symbol is spread across the entire time-frequency (TF) resource grid, resulting in significantly higher signal detection complexity for existing conventional detectors. As a result, efficient detection schemes are highly desirable.

On the other hand, deep learning (DL) has become the most successful machine learning method in various applications, such as computer vision and natural language processing. Owing to its ability to learn a suitable hierarchical representation of data, DL can solve complex problems for which rigid mathematical models are inadequate. Beyond these domains, DL has shown promising results in communication systems, including the physical layer, where signal detection is one of the applications that has attracted considerable attention. As such, this thesis investigates DL-based signal detection for OTFS systems using data-driven approaches. More precisely, the main contributions of this thesis are summarized as follows:

(1) We present a detailed review of low-complexity conventional signal detectors for OTFS systems. We compile an overview of some of the key algorithms under different categories and illustrate their performance and complexity attributes. We also address the main challenges in designing low-complexity OTFS detectors.

(2) Since the input-output relation of an OTFS system lies in the 2D DD domain and the transmitted information symbols can exploit the full channel diversity, we propose a two-dimensional convolutional neural network (2D-CNN) based detector. Unlike most DL-based detectors, we transform the OTFS frame (an N × M complex-valued matrix) into two real-valued matrices stacked together as a three-dimensional (3D) tensor that incorporates both the real and imaginary parts and serves as the input to the deep neural network (DNN).
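The snippet below is a minimal sketch of this input construction and of a small 2D-CNN operating on the DD grid. The frame size, layer widths, and QPSK alphabet are illustrative assumptions, not the thesis's exact architecture.

```python
# Sketch: stack an N x M complex OTFS frame into a 2-channel real tensor
# and classify each DD bin with a small 2D-CNN (illustrative sizes only).
import torch
import torch.nn as nn

N, M = 16, 64          # assumed Doppler and delay bins
Q = 4                  # assumed QPSK: 4 candidate constellation points

def to_real_tensor(Y):
    """Stack real and imaginary parts of an N x M complex frame
    into a (2, N, M) real-valued tensor."""
    return torch.stack((Y.real, Y.imag), dim=0)

class TwoDCNNDetector(nn.Module):
    """A few convolutional layers over the DD grid, followed by a
    per-bin classifier over the Q constellation points."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, Q, kernel_size=1)  # logits per DD bin

    def forward(self, y):                          # y: (batch, 2, N, M)
        return self.classifier(self.features(y))   # (batch, Q, N, M)

# Example usage with a random received frame
Y = torch.randn(N, M, dtype=torch.cfloat)
x = to_real_tensor(Y).unsqueeze(0)                 # (1, 2, N, M)
logits = TwoDCNNDetector()(x)
symbols = logits.argmax(dim=1)                     # hard decisions on the DD grid
```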
We also employ data augmentation (DA) techniques based on the widely used message-passing (MP) algorithm to improve the learning ability of the proposed method. Simulation results show that the proposed method outperforms the MP detector and achieves nearly the performance of the optimal maximum a posteriori (MAP) detector with very low time complexity.

(3) We extend the DL-based detection method to MIMO-OTFS systems. In most DL-based SISO/MIMO detection schemes, generic DNN architectures cannot efficiently learn to detect over randomly time-varying channels when working directly with the noisy received signal. Thus, most recent works either feed the channel to the DNN as an additional input or employ a deep unfolding (model-driven) strategy that maps an existing algorithm onto DL layers and improves its performance through learning. However, both approaches significantly increase the detection complexity of MIMO-OTFS systems. As in the SISO-OTFS case, we use a 2D-CNN, which can readily exploit the DD channel to learn the MIMO-OTFS input-output relation. Thanks to this beneficial property of OTFS, only a few CNN layers are sufficient to learn the channel features at low complexity. Furthermore, we use a DA technique based on an existing, computationally cheap linear detector to enhance the learning and detection ability of the proposed model (a minimal sketch follows). Numerical results show that the proposed method achieves better performance with lower computational complexity than state-of-the-art detectors.
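The following is a minimal sketch of this data-augmentation idea under assumed settings: a cheap linear MMSE detector is run over a vectorized DD-domain model y = Hx + n to generate extra labeled training pairs. The choice of MMSE, the dimensions, and the QPSK alphabet are illustrative assumptions, not the thesis's exact configuration.

```python
# Sketch: label received DD-domain frames with a cheap linear MMSE detector
# and reuse the pairs as additional training data (illustrative assumptions).
import numpy as np

def mmse_detect(y, H, noise_var):
    """Linear MMSE estimate x_hat = (H^H H + sigma^2 I)^{-1} H^H y."""
    G = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    return np.linalg.solve(G, H.conj().T @ y)

def augment_training_set(frames, H, noise_var, constellation):
    """Map each soft MMSE estimate to the nearest constellation point and
    return (received frame, estimated labels) pairs as extra samples."""
    augmented = []
    for y in frames:
        x_hat = mmse_detect(y, H, noise_var)
        labels = np.argmin(np.abs(x_hat[:, None] - constellation[None, :]), axis=1)
        augmented.append((y, labels))
    return augmented

# Example: QPSK alphabet and a random effective DD-domain channel matrix
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(64, 64) + 1j * np.random.randn(64, 64)) / np.sqrt(2)
noise = 0.1 * (np.random.randn(64) + 1j * np.random.randn(64)) / np.sqrt(2)
y = H @ qpsk[np.random.randint(4, size=64)] + noise
pairs = augment_training_set([y], H, noise_var=0.01, constellation=qpsk)
```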
Keywords/Search Tags:OTFS, MIMO-OTFS, DD channel, data-driven, 2D-CNN, computational complexity