
Research On Fine-grained Cache Management

Posted on: 2022-07-25
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Li
Full Text: PDF
GTID: 2568306737988429
Subject: Computer Science and Technology

Abstract/Summary:
Data cache is a key component for improving memory-access performance in storage systems. With the rapid development of big-data applications and cloud computing, the workload intensity and concurrency of storage servers keep growing, and so do the demands placed on storage caches. In particular, to achieve high storage access performance and I/O throughput, server-side storage caches have begun to manage cache data in large-granularity pages. However, managing cache data with large pages creates two serious problems for traditional caching algorithms.

First, traditional caching algorithms manage and replace cache data in units of whole pages: on a page miss, even if the I/O request accesses only one byte of the page, the entire page must be read from the underlying storage device into the cache. As a result, the cache space is polluted and cache utilization decreases; the larger the cache page, the more severe the pollution.

Second, different data within the same large cache page usually exhibit different access patterns and locality characteristics, so hot and cold data end up mixed within a single cache page. Traditional caching algorithms record popularity only for the entire page: no matter how much of the page is actually accessed, the popularity of the whole page increases uniformly. The mixing of hot and cold data therefore grows worse over time, which distorts cache access and replacement decisions, lowers the cache hit ratio, and degrades overall cache performance.

To solve these problems, this thesis proposes a fine-grained cache architecture with variable page size, called AIRCache. AIRCache consists of four parts: the Page Mapping Table (PMT), the Fine-Grained Recorder (FGR), the Multi-Granularity Writer (MGW), and the Multi-Granularity Replacement (MGR). The PMT records the metadata of cache pages. The FGR divides each large-granularity cache page into smaller sub-blocks and records their detailed access status. The MGW writes data into the cache at varying granularity, and the MGR evicts data from the cache at varying granularity to ensure high utilization of cache space and a high cache hit ratio.

To verify the effectiveness of AIRCache, this thesis implements the AIRCache scheme on top of the traditional cache replacement algorithms LRU and ARC, yielding AIR-LRU and AIR-ARC respectively, and evaluates them with workloads from the FIU trace set. Experimental results show that, compared with the traditional LRU and ARC algorithms, the read cache hit ratios of AIR-LRU and AIR-ARC improve by up to 7.59x and 2.05x respectively, and cache space utilization improves by 2.1x and 3.1x respectively.
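To make the idea concrete, the sketch below illustrates sub-block-granularity tracking and replacement in the spirit of FGR and MGR on top of LRU. It is a minimal illustrative sketch only, not the thesis's actual implementation: the page and sub-block sizes, class names, and the "evict the coldest resident sub-block of the LRU page" policy are all assumptions made for the example.

```python
from collections import OrderedDict

PAGE_SIZE = 4096        # assumed large cache-page size (bytes)
SUB_BLOCK_SIZE = 512    # assumed sub-block granularity (bytes)
SUB_BLOCKS = PAGE_SIZE // SUB_BLOCK_SIZE


class FineGrainedPage:
    """Per-page state kept at sub-block granularity (FGR-style sketch)."""
    def __init__(self):
        self.present = [False] * SUB_BLOCKS  # which sub-blocks are cached
        self.hits = [0] * SUB_BLOCKS         # per-sub-block popularity


class FineGrainedLRU:
    """LRU cache that admits and evicts data in sub-block units
    rather than whole pages (a simplified AIR-LRU-like sketch)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks      # budget measured in sub-blocks
        self.used = 0
        self.pages = OrderedDict()           # page_id -> FineGrainedPage (LRU -> MRU)

    def access(self, page_id, offset, length):
        """Access `length` bytes at `offset` within a page; return True on full hit."""
        first = offset // SUB_BLOCK_SIZE
        last = (offset + length - 1) // SUB_BLOCK_SIZE
        page = self.pages.get(page_id)
        if page is None:
            page = FineGrainedPage()
            self.pages[page_id] = page
        self.pages.move_to_end(page_id)      # refresh LRU position
        hit = all(page.present[b] for b in range(first, last + 1))
        for b in range(first, last + 1):
            if not page.present[b]:
                while self.used >= self.capacity:
                    if not self._evict_one(page_id):
                        break
                if self.used < self.capacity:
                    page.present[b] = True   # fetch only the missing sub-block
                    self.used += 1
            page.hits[b] += 1                # popularity counted per sub-block
        return hit

    def _evict_one(self, protect_id):
        """Evict the coldest resident sub-block of the least recently used page."""
        for victim_id in self.pages:         # OrderedDict iterates LRU -> MRU
            if victim_id == protect_id:
                continue
            victim = self.pages[victim_id]
            resident = [b for b in range(SUB_BLOCKS) if victim.present[b]]
            if not resident:
                continue
            coldest = min(resident, key=lambda b: victim.hits[b])
            victim.present[coldest] = False  # sub-block-granularity eviction
            self.used -= 1
            return True
        return False
```

In this sketch a one-byte request pulls in only one 512-byte sub-block instead of a full 4 KiB page, and replacement removes individual cold sub-blocks, which is the mechanism by which the thesis's approach avoids cache-space pollution and hot/cold mixing.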
Keywords/Search Tags:Storage cache, Fine-grained management, Hot and cold data identification