With the growing popularity of high-speed networks, traditional network intrusion detection systems (NIDS) run into several bottlenecks when handling massive volumes of packets, including low detection efficiency, insufficient processing capacity, and high packet loss rates. How to improve the efficiency of NIDS in high-speed network environments has therefore become a hot topic in network security research. Some excellent intrusion detection systems for high-speed networks already exist, but most of them rely on specially designed acceleration hardware to raise the detection rate; such hardware is expensive and inflexible, is applicable only to specialized institutions, and is not suitable for large-scale deployment. In recent years, the graphics processing unit (GPU) has advanced rapidly in hardware performance while remaining relatively inexpensive. Its programmability outside graphics, known as general-purpose computing on graphics processing units (GPGPU), together with increasingly mature programming tools, has drawn strong research interest as a way to exploit the GPU's immense parallel processing resources.

Building on this, this paper exploits the GPU's high-performance parallel processing capability and its distinctive architecture to parallelize the traditional Wu-Manber (WM) multi-pattern matching algorithm, and on that basis presents a GPU-based multi-pattern matching algorithm, GPU_WM. Experimental results show that GPU_WM achieves a better speedup than the original algorithm.

After studying and analyzing the popular open-source intrusion detection system Snort, GPU_WM is then adopted as the matching algorithm of the Snort detection engine, with the aim of improving the overall detection rate of Snort through the GPU's parallel processing capability. This paper first designs a Snort system architecture based on a GPU detection engine; then, to reduce the time spent transferring captured packets from host memory to the GPU, packets are accumulated into a data block and transferred to the GPU as a single batch. Because the three-dimensional linked-list structure of the Snort rule library is complex, a simplified approach is adopted: the pattern strings to be matched are extracted from the Snort rule library and supplied to the GPU_WM algorithm. Simulation results show that the text-processing performance of Snort with the GPU-based detection engine is slightly better than that of the original Snort, and the packet loss rate is also improved to a certain extent.
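
To make the matching step more concrete, the following CUDA kernel is a minimal sketch of the thread-level parallelization idea behind a GPU-ported Wu-Manber scan; it is not the paper's actual GPU_WM implementation. It assumes a block size of B = 2, that the Wu-Manber SHIFT table has already been built on the host, that all pattern strings are packed back-to-back in `patterns` with their offsets and lengths in `pat_off`/`pat_len`, and that the HASH-table candidate filtering of the full algorithm is omitted, so every pattern is verified when the shift value is zero. Each thread scans one slice of the text, and slices overlap by `min_len - 1` bytes so matches that straddle a slice boundary are not lost.

```cuda
#include <cuda_runtime.h>

#define B 2   // Wu-Manber block size assumed here; requires min_len >= 2

// Thread-parallel Wu-Manber-style scan (simplified sketch).
// shift:    SHIFT table indexed by a 16-bit block value (2 bytes of text)
// patterns: all pattern strings packed back-to-back
// slice:    bytes of text assigned to each thread, chosen by the host as
//           ceil(text_len / total_threads)
__global__ void wm_scan(const unsigned char *text, int text_len,
                        const unsigned short *shift,
                        const unsigned char *patterns,
                        const int *pat_off, const int *pat_len, int n_pat,
                        int min_len, int slice, int *match_count)
{
    int tid   = blockIdx.x * blockDim.x + threadIdx.x;
    int start = tid * slice;
    if (start >= text_len) return;
    // Extend this thread's range so windows crossing the boundary are seen.
    int end = min(start + slice + min_len - 1, text_len);

    for (int i = start + min_len - 1; i < end; ) {
        int block = (text[i - 1] << 8) | text[i];   // last B bytes of window
        int s = shift[block];
        if (s > 0) { i += s; continue; }
        // Shift of zero: verify every pattern at this alignment (the full
        // algorithm would first narrow candidates with a HASH table).
        int pos = i - min_len + 1;                  // window start in text
        for (int p = 0; p < n_pat; ++p) {
            int len = pat_len[p];
            if (pos + len > text_len) continue;
            bool ok = true;
            for (int k = 0; k < len; ++k)
                if (text[pos + k] != patterns[pat_off[p] + k]) { ok = false; break; }
            if (ok) atomicAdd(match_count, 1);      // record a match
        }
        ++i;
    }
}
```

On the host side, the SHIFT table and the packed pattern arrays would be copied to device memory once at rule-load time, so only the packet text changes between kernel launches.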
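The batching step can be sketched on the host side as follows, under the assumption that captured packet payloads are appended to one pinned (page-locked) host buffer and shipped to the GPU with a single cudaMemcpy per batch, rather than one copy per packet. `BATCH_BYTES`, `next_packet()`, and the `scan_batch` placeholder kernel are illustrative assumptions, not Snort internals or the paper's actual code.

```cuda
#include <cstring>
#include <cuda_runtime.h>

#define BATCH_BYTES (4 * 1024 * 1024)   // assumed size of one transfer unit

// Placeholder for the GPU_WM matching kernel sketched earlier.
__global__ void scan_batch(const unsigned char *buf, int len) { }

// Stub standing in for the packet-capture front end (libpcap in Snort);
// here it simply reports that no more packets are available.
static const unsigned char *next_packet(int *len) { *len = 0; return NULL; }

void detect_loop(void)
{
    unsigned char *h_buf, *d_buf;
    cudaHostAlloc((void **)&h_buf, BATCH_BYTES, cudaHostAllocDefault); // pinned
    cudaMalloc((void **)&d_buf, BATCH_BYTES);

    int used = 0, plen = 0;
    const unsigned char *pkt;
    while ((pkt = next_packet(&plen)) != NULL) {
        if (plen <= 0 || plen > BATCH_BYTES) continue;   // skip oversized payloads
        if (used + plen > BATCH_BYTES) {
            // Buffer full: one bulk host-to-device copy and one launch per batch.
            cudaMemcpy(d_buf, h_buf, used, cudaMemcpyHostToDevice);
            scan_batch<<<(used + 255) / 256, 256>>>(d_buf, used);
            cudaDeviceSynchronize();
            used = 0;
        }
        memcpy(h_buf + used, pkt, plen);                 // append packet to batch
        used += plen;
    }
    if (used > 0) {                                      // flush the final batch
        cudaMemcpy(d_buf, h_buf, used, cudaMemcpyHostToDevice);
        scan_batch<<<(used + 255) / 256, 256>>>(d_buf, used);
        cudaDeviceSynchronize();
    }

    cudaFree(d_buf);
    cudaFreeHost(h_buf);
}
```

The design intent is simply that one large host-to-device transfer amortizes the per-copy overhead that many small transfers would otherwise incur, which is the motivation given for batching above.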
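Finally, the rule-library simplification can be illustrated by packing the extracted pattern strings into the flat arrays that the kernel sketch above expects. This assumes the pattern strings (for example, the content options of Snort rules) have already been pulled out of the rule library; `upload_patterns()` and its array layout are assumptions made for this sketch, not Snort's internal representation.

```cuda
#include <cstdlib>
#include <cstring>
#include <cuda_runtime.h>

// Pack an array of C strings into one contiguous buffer plus offset/length
// tables, then copy all three arrays to device memory for the kernel above.
void upload_patterns(const char **pats, int n_pat,
                     unsigned char **d_patterns, int **d_off, int **d_len)
{
    int *off = (int *)malloc(n_pat * sizeof(int));
    int *len = (int *)malloc(n_pat * sizeof(int));
    int total = 0;
    for (int i = 0; i < n_pat; ++i) {
        off[i] = total;                       // where pattern i starts
        len[i] = (int)strlen(pats[i]);        // how long pattern i is
        total += len[i];
    }
    unsigned char *packed = (unsigned char *)malloc(total);
    for (int i = 0; i < n_pat; ++i)
        memcpy(packed + off[i], pats[i], len[i]);

    cudaMalloc((void **)d_patterns, total);
    cudaMalloc((void **)d_off, n_pat * sizeof(int));
    cudaMalloc((void **)d_len, n_pat * sizeof(int));
    cudaMemcpy(*d_patterns, packed, total, cudaMemcpyHostToDevice);
    cudaMemcpy(*d_off, off, n_pat * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(*d_len, len, n_pat * sizeof(int), cudaMemcpyHostToDevice);

    free(packed); free(off); free(len);
}
```

For instance, packing the two hypothetical patterns "cmd.exe" and "/etc/passwd" would yield an 18-byte packed buffer with offsets {0, 7} and lengths {7, 11}.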