Deep learning is a branch of machine learning that represents and learns from data with artificial neural networks. In recent years it has achieved groundbreaking results in image processing, speech recognition, and natural language processing, and has therefore been widely applied in multimedia stream processing services. In practical applications, deep learning is usually embedded as a module in a stream processing scenario. Although high-performance computing devices that support video codecs or deep learning computation are constantly being introduced, there is no universal way to implement streaming communication between different devices. At the same time, the constantly changing business scenarios of multimedia stream processing require the system to be adjusted frequently for each specific scenario. To better integrate and coordinate deep learning with multimedia processing, and to improve the performance, portability, and scalability of multimedia stream processing services, this thesis designs and implements a multimedia stream processing framework for deep learning. The main contributions are:

1. A software pipeline is used to implement a multimedia processing solution for real-world scenarios, achieving good throughput in application scenarios such as multimedia processing. Experiments show that on a deep learning acceleration board, with the classic VGG16-SSD as the inference network, a throughput of nearly 300 frames per second is obtained. (A minimal sketch of such a pipeline follows this list.)

2. A storage management system for stream processing scenarios is implemented. The system automatically manages the storage resources required for data exchange between the modules of a multimedia stream processing service and optimizes their utilization according to the characteristics of the streaming processing mode. (A buffer-pool sketch follows this list.)

3. A general-purpose plug-in system is implemented. The plug-in system gives every module in the stream processing service a unified interface for interacting with the framework. Through it, users can tailor and extend the modules of an existing stream processing service, flexibly customize the pipeline structure of the whole service, and quickly embed different processing modules, running on a variety of hardware devices, at each node of the service. (An interface sketch follows this list.)

4. To address the difficulty of combining deep learning programming frameworks with deep learning hardware, a software abstraction based on an intermediate representation close to the hardware is proposed. This approach allows deep learning applications to be deployed independently of the deep learning programming framework, while making the hardware adaptation of the application more focused and modular.
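To make the pipeline contribution (item 1) more concrete, the following is a minimal, illustrative sketch rather than the thesis implementation: a three-stage software pipeline (decode, inference, encode) whose stages run in separate threads and exchange frames through bounded queues, so that the stages overlap in time and throughput improves. All names here (BoundedQueue, Frame, the stage lambdas) are hypothetical, and the inference stage is only a stub.

// Illustrative sketch only: a three-stage software pipeline in which decode,
// inference, and encode run in separate threads and pass frames through
// bounded queues, so the stages overlap in time.
#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

template <typename T>
class BoundedQueue {                       // hypothetical helper class
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}
    void push(T v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(v));
        not_empty_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return v;
    }
private:
    std::size_t cap_;
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
};

struct Frame { int id; };                  // placeholder for a decoded frame

int main() {
    BoundedQueue<Frame> decoded(4), inferred(4);
    const int kFrames = 16;

    std::thread decode([&] {               // stage 1: produce frames
        for (int i = 0; i < kFrames; ++i) decoded.push(Frame{i});
    });
    std::thread infer([&] {                // stage 2: run the model (stubbed)
        for (int i = 0; i < kFrames; ++i) inferred.push(decoded.pop());
    });
    std::thread encode([&] {               // stage 3: consume results
        for (int i = 0; i < kFrames; ++i)
            std::cout << "frame " << inferred.pop().id << " done\n";
    });

    decode.join(); infer.join(); encode.join();
    return 0;
}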
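For the storage management contribution (item 2), a common way to exploit the repetitive allocation pattern of stream processing is to recycle fixed-size buffers instead of allocating and freeing memory for every frame. The sketch below assumes such a buffer-pool design; the BufferPool class and its acquire/release interface are hypothetical and are not taken from the thesis.

// Illustrative sketch only: a fixed-size buffer pool that recycles frame
// buffers between pipeline stages, matching the repetitive, fixed-size
// allocation pattern of stream processing.
#include <cstddef>
#include <iostream>
#include <memory>
#include <mutex>
#include <vector>

class BufferPool {                          // hypothetical class name
public:
    BufferPool(std::size_t buffer_size, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(std::make_unique<std::vector<std::byte>>(buffer_size));
    }
    // Acquire a buffer; returns nullptr when the pool is exhausted.
    std::unique_ptr<std::vector<std::byte>> acquire() {
        std::lock_guard<std::mutex> lk(m_);
        if (free_.empty()) return nullptr;
        auto buf = std::move(free_.back());
        free_.pop_back();
        return buf;
    }
    // Return a buffer to the pool so a later frame can reuse it.
    void release(std::unique_ptr<std::vector<std::byte>> buf) {
        std::lock_guard<std::mutex> lk(m_);
        free_.push_back(std::move(buf));
    }
private:
    std::vector<std::unique_ptr<std::vector<std::byte>>> free_;
    std::mutex m_;
};

int main() {
    BufferPool pool(1920 * 1080 * 3, 4);    // e.g. four RGB 1080p frame buffers
    auto buf = pool.acquire();              // a stage borrows a buffer...
    std::cout << "buffer size: " << buf->size() << " bytes\n";
    pool.release(std::move(buf));           // ...and returns it when finished
    return 0;
}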
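For the plug-in system contribution (item 3), the sketch below shows one plausible shape of a unified module interface: the framework drives every module through the same init/process/shutdown contract, so processing modules can be swapped or rearranged without changing the framework itself. The Plugin and DummyInference classes and the Packet type are hypothetical illustrations, not the thesis API.

// Illustrative sketch only: a unified plug-in interface through which the
// framework drives every processing module; concrete modules (decoders,
// inference engines, encoders) implement the same contract.
#include <cstddef>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Packet { std::vector<std::byte> data; };   // hypothetical data unit

class Plugin {                                    // hypothetical interface name
public:
    virtual ~Plugin() = default;
    virtual bool init(const std::string& config) = 0;  // per-module setup
    virtual Packet process(Packet in) = 0;             // one unit of work
    virtual void shutdown() = 0;                       // release resources
};

// Example module: a pass-through "inference" stage that only reports sizes.
class DummyInference : public Plugin {
public:
    bool init(const std::string& config) override {
        std::cout << "init with config: " << config << "\n";
        return true;
    }
    Packet process(Packet in) override {
        std::cout << "processing " << in.data.size() << " bytes\n";
        return in;                                // a real module would run a model
    }
    void shutdown() override { std::cout << "shutdown\n"; }
};

int main() {
    // The framework sees only the Plugin interface; concrete modules could be
    // registered or loaded (e.g. from shared libraries) at run time.
    std::unique_ptr<Plugin> stage = std::make_unique<DummyInference>();
    stage->init("dummy.cfg");
    stage->process(Packet{std::vector<std::byte>(1024)});
    stage->shutdown();
    return 0;
}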