
A Shared Cache Partitioning Scheme For NVMe SSD

Posted on: 2023-07-05  Degree: Master  Type: Thesis
Country: China  Candidate: W G Liu  Full Text: PDF
GTID: 2568307043475184  Subject: Computer system architecture
Abstract/Summary:
Non-volatile memory express (NVMe) solid-state drives (SSDs) have been widely adopted in emerging storage systems; they provide multiple I/O queues and a high-speed bus to sustain high data transfer rates. NVMe SSDs use streams to place related data in associated locations for performance enhancement. The on-board DRAM cache inside NVMe SSDs can efficiently reduce disk accesses and extend the lifetime of SSDs, thus improving the overall efficiency of the storage system. However, in previous studies, such an SSD cache has been used only as a shared cache for all streams or as a statically partitioned cache per stream, which may seriously degrade per-stream performance and underutilize the valuable cache resources. In this paper, we present MLCache (Machine Learning-based Cache Management), a space-efficient shared cache management scheme for NVMe SSDs, applicable both to the data cache and to the cached mapping table inside the SSD. It maximizes hit ratios and enhances SSD lifetime while avoiding serious damage to fairness. We formulate cache space allocation as a machine learning problem. By learning the impact of reuse distance on cache allocation, we build a workload-specific neural network model. At runtime, MLCache continuously monitors the reuse distance distribution and feeds it to the neural network module to obtain space-efficient allocation decisions. To address the issue of fairness guarantees, a parallel write-back strategy based on hit ratio is proposed, which reduces the overhead of unfair load and ensures fairness through parallel write-back. Experimental results show that MLCache improves the write hit ratio of the SSD by 24.3% over the baseline on the data cache, and improves the hit ratio by 19.8% over the baseline on the cached mapping table. Compared with the baseline, MLCache reduces the number of garbage collections by 10.1%, response time by 16.7%, and write amplification by 12.3%. MLCache closely approximates the ideal model, reaching 95.5% similarity to it, and is able to ensure fairness and reduce SSD latency.
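The reuse-distance monitoring that drives MLCache's allocation decisions can be sketched as follows. This is a minimal host-side illustration, not the thesis's firmware implementation: the function names, the LRU-stack method of computing reuse distance, and the histogram bucket edges are all illustrative assumptions; the thesis only states that the reuse distance distribution is collected per stream and fed to the neural network.

```python
import bisect

def reuse_distance_trace(accesses):
    """Reuse distance of each access in one stream's address trace.

    Reuse distance = number of distinct addresses touched since the
    previous access to the same address; None marks a cold (first)
    access.  Computed with a simple LRU stack: the distance is the
    address's depth in the most-recently-used ordering.
    """
    lru = []      # addresses, most-recently-used first
    dists = []
    for addr in accesses:
        if addr in lru:
            dists.append(lru.index(addr))  # distinct addrs in between
            lru.remove(addr)
        else:
            dists.append(None)             # cold miss
        lru.insert(0, addr)
    return dists

def rd_histogram(dists, bin_edges=(1, 4, 16, 64, 256)):
    """Bucket reuse distances into a fixed-size distribution vector
    (illustrative bucket edges); cold misses go in the final bucket.
    A vector like this, one per stream, would be the model's input."""
    hist = [0] * (len(bin_edges) + 2)
    for d in dists:
        if d is None:
            hist[-1] += 1
        else:
            hist[bisect.bisect_right(bin_edges, d)] += 1
    return hist
```

For example, the trace `['a', 'b', 'a']` yields distances `[None, None, 1]`: one distinct address (`b`) separates the two accesses to `a`. The O(n) per-access list scan is acceptable for a sketch; a real monitor would use a tree or sampling to keep the overhead bounded.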
Keywords/Search Tags: NVMe SSDs, Cache partition, Reuse distance, Machine learning, Fairness