Article citations


Wu, J., Hua, Y., Zuo, P. and Sun, Y. (2018) Improving Restore Performance in Deduplication Systems via a Cost-Efficient Rewriting Scheme. IEEE Transactions on Parallel and Distributed Systems, 30, 119-132.
https://doi.org/10.1109/TPDS.2018.2852642

has been cited by the following article:

  • TITLE: dCACH: Content Aware Clustered and Hierarchical Distributed Deduplication

    AUTHORS: Girum Dagnaw, Ke Zhou, Hua Wang

    KEYWORDS: Clustered Deduplication, Content Aware Grouping, Hierarchical Deduplication, Stateful Routing, Similarity Bloom Filters

    JOURNAL NAME: Journal of Software Engineering and Applications, Vol.12 No.11, November 27, 2019

    ABSTRACT: In deduplication, the index-lookup disk bottleneck is a major obstacle that limits the throughput of backup processes. One way to minimize the effect of this issue and boost speed is to use very coarse-grained chunks for deduplication, at the cost of lower storage savings and limited scalability. Another way is to distribute the deduplication process among multiple nodes, but this approach introduces the storage-node island effect and also incurs high communication cost. In this paper, we explore dCACH, a content-aware clustered and hierarchical deduplication system, which implements a hybrid of inline coarse-grained and offline fine-grained distributed deduplication, where routing decisions are made for a set of files instead of single files. It utilizes Bloom filters to detect similarity between a data stream and previous data streams and performs stateful routing, which solves the storage-node island problem. Moreover, it exploits the negligibly small amount of content shared among chunks from different file types to create groups of files and deduplicate each group in its own fingerprint index space. It implements hierarchical deduplication to reduce the size of fingerprint indexes at the global level, where only files and large segments are deduplicated. Locality is created and exploited first by using the large segments deduplicated at the global level and second by routing a set of consecutive files together to one storage node. Furthermore, the use of Bloom filters for similarity detection between streams has low communication and computation cost while achieving duplicate-elimination performance comparable to single-node deduplication. dCACH is evaluated using a prototype deployed on a server environment distributed over four separate machines. It is shown to have 10× the speed of Extreme_Binn with minimal communication overhead, while its duplicate-elimination effectiveness is on a par with a single-node deduplication system.
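
The abstract describes routing whole groups of files to storage nodes based on Bloom-filter similarity with streams seen previously (stateful routing). The sketch below is only an illustration of that general idea in Python; the class and function names (BloomFilter, route_file_group), filter size, and hash count are assumptions for the example, not the dCACH implementation.

```python
# Illustrative sketch: similarity-Bloom-filter routing of file groups to
# storage nodes. All names and parameters here are assumed for the example.
import hashlib

class BloomFilter:
    """Minimal Bloom filter over chunk fingerprints (bytes)."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive several bit positions from salted SHA-1 digests of the item.
        for i in range(self.num_hashes):
            h = hashlib.sha1(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def contains(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


def route_file_group(chunk_fingerprints, node_filters):
    """Send a group of files to the node whose previously stored stream
    shares the most chunk fingerprints with it (stateful routing)."""
    best_node, best_hits = 0, -1
    for node_id, bf in enumerate(node_filters):
        hits = sum(1 for fp in chunk_fingerprints if bf.contains(fp))
        if hits > best_hits:
            best_node, best_hits = node_id, hits
    # Record the group on the chosen node's filter so later, similar groups
    # are routed to the same place instead of scattering duplicates.
    for fp in chunk_fingerprints:
        node_filters[best_node].add(fp)
    return best_node


# Example: a second group that shares most of its chunks with the first
# is routed to the same node, avoiding the storage-node island effect.
nodes = [BloomFilter() for _ in range(4)]
group_a = [hashlib.sha1(b"chunk-%d" % i).digest() for i in range(100)]
group_b = group_a[:60] + [hashlib.sha1(b"new-%d" % i).digest() for i in range(40)]
print(route_file_group(group_a, nodes))
print(route_file_group(group_b, nodes))  # same node as group_a
```

Because the per-node state is a fixed-size bit array rather than a full fingerprint index, exchanging or querying it is cheap, which is consistent with the abstract's claim of low communication and computation cost for stream-similarity detection.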