
Research And Design Of Cache In High Concurrency Web Scenarios Based On Nginx And Redis

Posted on: 2022-01-28  Degree: Master  Type: Thesis
Country: China  Candidate: S Chen  Full Text: PDF
GTID: 2518306557961769  Subject: Electronics and Communications Engineering
Abstract/Summary:
With the explosive growth in demand for high concurrency, simply upgrading server configurations, increasing network bandwidth, or adopting higher-performance storage hardware on the original architecture can no longer satisfy the throughput requirements of large systems. How to further improve application performance and expand system capacity has become a central concern for major websites and their engineers. Traditional solutions include separating the application from data reads and writes, introducing caches to improve system performance, deploying clusters and reverse proxies, and applying database read-write separation together with database and table sharding. Among these, caching plays a particularly important role in system design and optimization. However, the traditional approach simply introduces a single layer of cache database, or sets up a local cache inside the application. In high-concurrency Web scenarios, the application server must frequently establish network connections to the cache database, which incurs substantial network overhead and degrades system performance. Therefore, building on an in-depth study of the original two-level cache (application-platform cache plus Redis cluster cache), this thesis establishes a multi-level caching strategy combining Nginx local cache, application-platform cache, and Redis database cluster cache. In addition, in high-concurrency write scenarios, the presence of a cache can leave the data in the cache and the database inconsistent. After analyzing traditional strong-consistency solutions, this thesis designs a message-queue-based solution that both reduces the complexity of the application-layer code and guarantees eventual consistency of the data.

The main work completed in this thesis is as follows:
(1) To address the high latency caused by the application server frequently establishing network connections to the cache database under high-concurrency Web reads, the pros and cons of caching at different levels are analyzed, and a multi-level caching strategy based on Nginx local cache, application-platform cache, and Redis cluster cache is proposed. Placing caches at multiple levels effectively improves the response speed and throughput of the service.
(2) To address inconsistency between cache and database data, this thesis analyzes the shortcomings of cache-update schemes and of strong database consistency. Guided by the CAP theorem, RocketMQ is used to achieve partition tolerance and high availability in the distributed system. When the system does not require strong consistency, the message queue not only reduces the complexity of the application-layer code but also improves the performance of high-concurrency write operations.
(3) Different caching strategies are benchmarked with the JMeter performance stress-testing tool, and the traditional solution is compared against the message-queue solution under cache-database inconsistency. The experimental comparison shows that the multi-level cache and message queue designed in this thesis meet the read and write requirements of high-concurrency Web scenarios well.
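The multi-level read path described in contribution (1) can be illustrated with a minimal Python sketch: tiers are consulted in order (fastest first), a hit back-fills the faster tiers, and only a miss at every level falls through to the database. The tier names, TTL values, and class names here are illustrative stand-ins for the Nginx shared cache, the application-platform cache, and the Redis cluster, not code from the thesis.

```python
import time

class CacheTier:
    """One cache level with a per-entry TTL (illustrative stand-in for
    an Nginx local cache, an in-process cache, or a Redis cluster)."""
    def __init__(self, name, ttl_seconds):
        self.name = name
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:  # lazy expiry on read
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time() + self.ttl)

class MultiLevelCache:
    """Consult tiers fastest-first; on a hit, back-fill the faster
    tiers so subsequent reads stop earlier; on a full miss, load
    from the database and populate every tier."""
    def __init__(self, tiers, load_from_db):
        self.tiers = tiers
        self.load_from_db = load_from_db

    def get(self, key):
        for i, tier in enumerate(self.tiers):
            value = tier.get(key)
            if value is not None:
                for faster in self.tiers[:i]:  # back-fill faster tiers
                    faster.put(key, value)
                return value
        value = self.load_from_db(key)  # every tier missed
        for tier in self.tiers:
            tier.put(key, value)
        return value
```

Because the fastest tier answers repeated reads, the database (and the network hop to the remote cache) is touched only on a cold key, which is the latency saving the thesis attributes to caching at multiple levels.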
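The eventual-consistency write path of contribution (2) can likewise be sketched: the writer updates the database and then publishes an invalidation message instead of updating the cache inline, and an asynchronous consumer drains the queue and evicts the stale entry. This is a minimal in-memory sketch assuming a delete-on-invalidate policy; the `queue.Queue` here is only a stand-in for a RocketMQ topic, and all class and method names are hypothetical.

```python
import queue

class WriteThroughWithQueue:
    """On write: update the database first, then enqueue an
    invalidation message rather than touching the cache inline.
    A consumer drains the queue and deletes the stale cache entry,
    giving eventual (not strong) cache-database consistency."""
    def __init__(self):
        self.database = {}
        self.cache = {}
        self.mq = queue.Queue()  # stand-in for a RocketMQ topic

    def write(self, key, value):
        self.database[key] = value          # database is the source of truth
        self.mq.put(("invalidate", key))    # decouple cache maintenance

    def consume_one(self):
        """One step of the asynchronous consumer."""
        op, key = self.mq.get_nowait()
        if op == "invalidate":
            self.cache.pop(key, None)       # evict; next read repopulates

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.database[key]          # cache miss: load and fill
        self.cache[key] = value
        return value
```

A read between the write and the consumer run can still see the old cached value, which is exactly the window eventual consistency tolerates; in exchange, the write path never blocks on the cache, which is the high-concurrency-write benefit the abstract claims.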
Keywords/Search Tags:high concurrency, multi-level cache mechanism, data consistency, distributed database