2016 IEEE International Conference on Cluster Computing (CLUSTER) (2016)
Sept. 12, 2016 to Sept. 16, 2016
Online graph analytics and large-scale interactive applications such as social media networks require low-latency access to billions of small data objects. As a consequence, more and more distributed in-memory systems are being proposed that keep all data in memory at all times. However, these systems also need fault-tolerance mechanisms to mask node failures and power outages. We propose a novel two-level logging architecture with backup-side version control that combines fault-tolerance with high throughput for many small data objects while incurring minimal memory overhead. To prevent unlimited growth of the logs, we use a highly concurrent log-cleaning approach. All proposed concepts have been implemented within the DXRAM system and evaluated using the Yahoo! Cloud Serving Benchmark and RAMCloud's Log Cleaner benchmark. The experiments show that our solution has less memory overhead and outperforms state-of-the-art in-memory systems such as RAMCloud, Redis, and Aerospike for the target application domains.
Keywords: Random access memory, Peer-to-peer computing, Benchmark testing, Throughput, Distributed databases, Memory management, Cleaning
K. Beineke, S. Nothaas and M. Schöttner, "High Throughput Log-Based Replication for Many Small In-Memory Objects," 2016 IEEE International Conference on Cluster Computing (CLUSTER), Taipei, Taiwan, 2016, pp. 160-161.