
Research On Low Power And High Efficient Data Layout For Distributed Key Value Storage Systems

Posted on: 2017-05-06    Degree: Doctor    Type: Dissertation
Country: China    Candidate: N N Zhao    Full Text: PDF
GTID: 1318330485950831    Subject: Computer system architecture
Abstract/Summary:
Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. At the same time, enterprise data centers face growing pressure to reduce their power consumption. Server power consumption is a major concern for distributed storage systems because it contributes substantially to a data center's power bill, and as data volumes grow, ever larger amounts of storage are required. Reducing server power consumption has therefore become an urgent priority.

There is a large body of research on reducing power consumption by powering down a subset of storage devices or servers. However, powering servers down in a distributed key value storage system is challenging because reliability may be compromised.

In addition, as NAND flash-based SSDs continue to grow in both capacity and popularity, their adoption in server storage, both as cache and as primary storage, is accelerating because of their superior performance and energy efficiency. However, the I/O workload is unbalanced across flash-based servers, which causes wear imbalance. Wear imbalance has a significant impact on the reliability, performance, and lifetime of the whole distributed key value storage cluster.

To address these problems, this paper first proposes GreenCHT, a power management scheme for consistent-hashing-based distributed key value storage systems. It consists of a multi-tier replication scheme and a predictive power-mode scheduler. Instead of randomly placing the replicas of each object on a number of nodes in the consistent hash ring, GreenCHT arranges the replicas of objects on non-overlapping tiers of nodes in the ring. This allows the system to enter various power modes by powering down subsets of servers without violating data availability. The predictive power-mode scheduler predicts workloads and adapts to load fluctuations; it cooperates with the multi-tier replication strategy to provide power proportionality, so that performance scales in a power-proportional manner: the power consumed is proportional to the workload.

To maintain reliability when replicas are powered down, a distributed log-store is proposed that redirects writes destined for standby replicas to active servers, which keeps the system failure-tolerant. When a subset of servers is powered down, all writes to the powered-down replicas are distributed to their active successors in a reliable order, which preserves both reliability and write parallelism. The log-store thus maintains the configured level of redundancy even in low-power mode. After an unexpected server failure, the system powers up a tier of servers and starts recovery: the off-loaded writes are reclaimed from the log-store. In this way, the system can tolerate R-1 server failures, where R is the replication level. This paper also presents the strategies GreenCHT uses to handle server failures, and illustrates the multi-tier placement idea sketched below.
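As a rough illustration of the multi-tier placement idea described above, the following Python sketch keeps one consistent-hash ring per tier and places each object's i-th replica on tier i, so that any whole tier can be powered down while the remaining tiers still hold a copy of every object. The class name MultiTierRing, the node names, and the MD5-based hashing are assumptions made for illustration; they are not the dissertation's implementation.

import hashlib
from bisect import bisect_right

class MultiTierRing:
    # Illustrative sketch only: nodes are split into R non-overlapping tiers,
    # and replica i of every object lives on tier i.
    def __init__(self, tiers):
        # tiers: R lists of node names, e.g. [["s1","s2"], ["s3","s4"], ...]
        self.rings = []
        for nodes in tiers:
            ring = sorted((self._hash(n), n) for n in nodes)
            self.rings.append(ring)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def replicas(self, key):
        # One node per tier: the clockwise successor of the key's hash on each ring.
        h = self._hash(key)
        chosen = []
        for ring in self.rings:
            positions = [pos for pos, _ in ring]
            idx = bisect_right(positions, h) % len(ring)
            chosen.append(ring[idx][1])
        return chosen

ring = MultiTierRing([["s1", "s2", "s3"], ["s4", "s5", "s6"], ["s7", "s8", "s9"]])
print(ring.replicas("user:42"))  # one replica per tier, e.g. ['s2', 's6', 's7']

Because the tiers are disjoint, powering down tiers 1..R-1 in turn leaves tier 0 fully populated, which is what allows the scheduler to trade replicas for power without losing availability.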
This paper further proposes an endurance-aware write off-loading technique (EWO) for balancing wear across flash-based servers at minimal extra cost. EWO exploits the out-of-place update feature of flash memory by off-loading writes across flash servers, rather than migrating data between them, to limit the extra wear it introduces. To distribute erase cycles evenly across flash servers, EWO off-loads writes from servers with high erase cycles to servers with low erase cycles, first quantitatively estimating the amount of writes to redirect from the frequency of garbage collection. To reduce the metadata overhead caused by write off-loading, EWO employs a hot-slice off-loading policy that explores the trade-off between extra wear cost and metadata overhead.
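The following Python sketch shows one way such a planning step could look: servers whose erase cycles are above the cluster mean donate writes to servers below the mean, and the amount of off-loaded writes is estimated from each donor's garbage-collection frequency. The function name, inputs, and the proportional formula are assumptions made for illustration, not the dissertation's algorithm.

def plan_offloading(erase_cycles, gc_freq, window=3600):
    # Hypothetical endurance-aware off-loading plan (illustrative only).
    #   erase_cycles: {server: cumulative erase cycles of its flash}
    #   gc_freq:      {server: garbage collections per second}
    #   window:       planning window in seconds
    # Returns a list of (donor, recipient, estimated writes to off-load).
    mean = sum(erase_cycles.values()) / len(erase_cycles)
    donors = sorted((s for s in erase_cycles if erase_cycles[s] > mean),
                    key=erase_cycles.get, reverse=True)   # most worn first
    recipients = sorted((s for s in erase_cycles if erase_cycles[s] < mean),
                        key=erase_cycles.get)             # least worn first
    plan = []
    for donor, recipient in zip(donors, recipients):
        # Redirect a share of the donor's GC-inducing writes proportional to
        # how far its erase count sits above the cluster mean.
        writes = int(gc_freq[donor] * window *
                     (erase_cycles[donor] - mean) / erase_cycles[donor])
        plan.append((donor, recipient, writes))
    return plan

print(plan_offloading({"f1": 1200, "f2": 900, "f3": 600},
                      {"f1": 0.5, "f2": 0.3, "f3": 0.2}))
# [('f1', 'f3', 450)]

Because only the writes are redirected (the underlying data stays put), the rebalancing itself adds little extra wear, which is the point of off-loading rather than migrating.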
Keywords/Search Tags: CHT, Power Management, Key Value Storage System, Reliability, Wear Balancing