Background modeling is a widely used method for moving-target detection and a key technology in image processing. Image processing is by nature a large-scale computation, which makes it particularly well suited to parallel processing. This paper uses the CPU-side TBB (Threading Building Blocks) library and the GPU-side CUDA (Compute Unified Device Architecture) platform, combined with the characteristics of background modeling algorithms, to accelerate those algorithms. Based on an experimental comparison, the optimal parallel scheme is chosen to achieve better acceleration for each background modeling algorithm.

Both the single-Gaussian model and the Gaussian Mixture Model (GMM) are classical algorithms in image processing. Their serial implementations share common characteristics: large data storage requirements, complicated parameters, and high computational complexity. Because the existing CPU serial implementations have long running times while the operations on each pixel are largely independent of one another, these algorithms are very well suited to parallel processing. This paper makes full use of CUDA and TBB and proposes a hybrid TBB-CUDA architecture, applied to both the single-Gaussian model and the GMM.

Firstly, this paper introduces the relevant background on TBB and CUDA and presents the framework of the hybrid architecture. Secondly, it introduces the basic theory of the single-Gaussian model and the GMM together with their serial implementations; by analyzing the serial algorithms, the time-consuming and parallelizable sections are identified. Thirdly, it describes the implementation of the three parallel methods (TBB, CUDA, and the TBB-CUDA hybrid) for the background modeling algorithms. Finally, by analyzing the results of each parallel method, the TBB-CUDA hybrid is found to be the most suitable scheme for the single-Gaussian model, while pure CUDA performs better for the GMM. The analysis of these two algorithms shows that the advantages and disadvantages of TBB and CUDA must each be weighed carefully for an algorithm to achieve an ideal acceleration.
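For reference, the single-Gaussian model summarized above treats each pixel as an independent Gaussian distribution. A standard formulation (the symbols I_t, mu_t, sigma_t, the learning rate alpha, and the threshold factor k are notational assumptions, since the abstract does not fix a notation) is:

```latex
\[
|I_t - \mu_{t-1}| > k\,\sigma_{t-1} \;\Rightarrow\; \text{pixel is foreground at frame } t,
\]
\[
\mu_t = (1-\alpha)\,\mu_{t-1} + \alpha I_t, \qquad
\sigma_t^2 = (1-\alpha)\,\sigma_{t-1}^2 + \alpha\,(I_t - \mu_t)^2 .
\]
```

The GMM generalizes this by maintaining several weighted Gaussians per pixel, which is why its per-pixel state and parameter updates are heavier than the single-Gaussian case.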
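To illustrate the per-pixel independence that the abstract identifies as the basis for parallelization, a minimal CUDA kernel for the single-Gaussian update could look like the sketch below. The kernel name, parameter names, and thresholding scheme are illustrative assumptions, not the paper's actual code.

```cuda
// Minimal sketch: one CUDA thread updates one pixel's Gaussian model.
// All names and parameters are assumptions for illustration only.
__global__ void updateSingleGaussian(const unsigned char* frame,
                                     float* mean, float* var,
                                     unsigned char* foreground,
                                     int numPixels, float alpha, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;               // guard threads past the image end

    float pixel = static_cast<float>(frame[i]);
    float diff  = pixel - mean[i];
    float sigma = sqrtf(var[i]);

    // Foreground test: deviation beyond k standard deviations.
    foreground[i] = (fabsf(diff) > k * sigma) ? 255 : 0;

    // Running update of the per-pixel Gaussian parameters.
    mean[i] += alpha * diff;
    float d2 = pixel - mean[i];
    var[i]   = (1.0f - alpha) * var[i] + alpha * d2 * d2;
}
```

Because no thread reads or writes another pixel's state, the kernel maps one thread per pixel with no synchronization, which is the property the abstract relies on when calling the algorithms "very suitable for parallel processing".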
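The CPU side of such a hybrid can similarly be sketched with TBB's parallel_for over pixel ranges. Again this is a minimal, assumed sketch; the abstract does not detail how the paper's hybrid scheme splits work between TBB and CUDA.

```cuda
// Host-side C++ sketch using tbb::parallel_for over pixel indices.
// Names and the work split are assumptions for illustration only.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cmath>
#include <cstdint>

void updateSingleGaussianTBB(const uint8_t* frame, float* mean, float* var,
                             uint8_t* foreground, int numPixels,
                             float alpha, float k)
{
    tbb::parallel_for(tbb::blocked_range<int>(0, numPixels),
        [=](const tbb::blocked_range<int>& r) {
            // Each TBB task processes a contiguous range of pixels.
            for (int i = r.begin(); i != r.end(); ++i) {
                float pixel = static_cast<float>(frame[i]);
                float diff  = pixel - mean[i];
                foreground[i] =
                    (std::fabs(diff) > k * std::sqrt(var[i])) ? 255 : 0;
                mean[i] += alpha * diff;
                float d2 = pixel - mean[i];
                var[i]   = (1.0f - alpha) * var[i] + alpha * d2 * d2;
            }
        });
}
```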