LH*G: A High-Availability Scalable Distributed Data Structure By Record Grouping
July/August 2002 (vol. 14 no. 4)
pp. 923-927

LH*g is a high-availability extension of the LH* Scalable Distributed Data Structure. An LH*g file scales up with constant key-search and insert performance while surviving the unavailability (failure) of any single site. We achieve high availability through a new principle of record grouping. A group is a logical structure of up to k records, where k is a file parameter. Every group contains a parity record that allows the reconstruction of an unavailable member. The basic scheme may be generalized to support the unavailability of any number of sites, at the expense of additional storage and messaging. Other known high-availability schemes are static, require more storage, or provide worse search performance.
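The record-grouping principle can be illustrated with a minimal sketch, assuming bytewise XOR parity (an illustration of the idea, not the authors' implementation): each group of up to k data records carries one parity record, and any single unavailable member is rebuilt from the k-1 survivors plus the parity.

```python
# Sketch of parity-based record grouping (assumed XOR parity, not the
# paper's exact scheme). A group of up to k data records stores one
# parity record; any single unavailable member can be reconstructed
# from the remaining members and the parity.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def pad(records, width):
    """Zero-pad records to a common width so XOR is well defined."""
    return [r.ljust(width, b"\x00") for r in records]

def parity_record(records):
    """Parity of a group: XOR of all (padded) member records."""
    width = max(len(r) for r in records)
    return reduce(xor_bytes, pad(records, width))

def reconstruct(survivors, parity):
    """Rebuild the single unavailable member of a group."""
    width = len(parity)
    rebuilt = reduce(xor_bytes, pad(survivors, width), parity)
    # Simplification: a real scheme would record each member's length;
    # stripping trailing zeros fails if a record ends in zero bytes.
    return rebuilt.rstrip(b"\x00")

# Usage: a group with k = 3 data records and its parity record.
group = [b"record-A", b"record-B2", b"rec-C"]
p = parity_record(group)
# The site holding record-B2 becomes unavailable; rebuild it:
assert reconstruct([group[0], group[2]], p) == b"record-B2"
```

The storage overhead is one parity record per group of k data records, which is how the scheme trades a small, tunable amount of extra storage for single-site availability.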

[1] G. Alvarez, W. Burkhard, and F. Cristian, “Tolerating Multiple Failures in RAID Architectures with Optimal Storage and Uniform Declustering,” Proc. Int'l Symp. Computer Architecture (ISCA-97), 1997.
[2] W. Bennour et al. “Scalable Distributed Linear Hashing LH*LH Under Windows NT,” Proc. IEEE Fourth World Multiconf. Systems Cybernetics & Informatics and Information Systems Analysis & Synthesis, 2000.
[3] D. Culler, “NOW: Towards Everyday Supercomputing on a Network of Workstations,” EECS technical report, UC Berkeley, 1994.
[4] R. Devine, “Design and Implementation of DDH: Distributed Dynamic Hashing,” Proc. Int'l Conf. Foundations of Data Organization and Algorithms (FODO-93), Oct. 1993.
[5] J. Gray, “Super-Servers: Commodity Computer Clusters Pose a Software Challenge,” Microsoft technical report, 1996.
[6] H.-I. Hsiao and D. DeWitt, “Chained Declustering: A New Availability Strategy for Multiprocessor Database Machines,” Proc. Sixth Int'l IEEE Conf. Data Eng., 1990.
[7] J.H. Hartman and J.K. Ousterhout, “The Zebra Striped Network File System,” ACM Trans. Computer Systems, vol. 13, no. 3, pp. 274–310, 1995.
[8] S.-O. Hvasshovd et al. “A Continuously Available and Highly Scalable Transaction Server,” Proc. Fourth Int'l Workshop High Performance Transaction Systems, 1991.
[9] J. Karlsson, W. Litwin, and T. Risch, “LH*lh: A Scalable High Performance Data Structure for Switched Multicomputers,” Proc. Int'l Conf. Extending Database Technology (EDBT-96), Mar. 1996.
[10] D. Knuth, The Art of Computer Programming, vol. 3: Sorting and Searching. Addison-Wesley, 1973.
[11] B. Kroll and P. Widmayer, “Distributing a Search Structure Among a Growing Number of Processors,” Proc. ACM SIGMOD Conf., pp. 265-276, 1994.
[12] J.-C. Laprie, “Dependable Computing and Fault Tolerance: Concepts and Terminology,” Proc. 15th Int'l Symp. Fault-Tolerant Computing, pp. 2-11, 1985.
[13] R. Lindberg, “A Java Implementation of a Highly Available Scalable and Distributed Data Structure LH*g,” master's thesis, LiTH-IDA-Ex-97/65, Univ. of Linkoping, p. 62, 1997.
[14] W. Litwin, J. Menon, and T. Risch, “Design Issues For Scalable Availability LH* Schemes with Record Grouping,” Proc. Workshop Distributed Data and Structures (DIMACS), J. Menon, T. Risch, and T. Schwarz, eds., 1999.
[15] W. Litwin, M.-A. Neimat, and D. Schneider, “LH*: Linear Hashing for Distributed Files,” Proc. ACM-SIGMOD Int'l Conf. Management of Data, 1993.
[16] W. Litwin, M.-A. Neimat, and D. Schneider, “RP*: A Family of Order-Preserving Scalable Distributed Data Structures,” Proc. 20th Int'l Conf. Very Large Data Bases (VLDB), 1994.
[17] A. Nanda and L.M. Ni, “Benchmark Workload Generation and Performance Characterization of Multiprocessors,” Proc. Supercomputing '92, pp. 20-29, Nov. 1992.
[18] W. Litwin and M.-A. Neimat, “k-RP*N: A High Performance Multi-Attribute Scalable Distributed Data Structure,” Proc. IEEE Int'l Conf. Parallel and Distributed Information Systems, 1996.
[19] W. Litwin and M.-A. Neimat, “High-Availability LH* Schemes with Mirroring,” Proc. Int'l Conf. Cooperating Information Systems, June 1996.
[20] W. Litwin, M.-A. Neimat, G. Levy, S. Ndiaye, and T. Seck, “LH*S: A High-Availability and High-Security Scalable Distributed Data Structure,” Proc. IEEE Workshop Research Issues in Data Eng. (RIDE-97), 1997.
[21] W. Litwin and T. Risch, “LH*g: A High-Availability Scalable Distributed Data Structure by Record Grouping,” research report, Univ. of Paris 9 and Univ. of Linkoping, Apr. 1997.
[22] W. Litwin and T. Schwarz, “LH*RS: A High-Availability Scalable Distributed Data Structure Using Reed Solomon Codes,” Proc. ACM-SIGMOD-2000 Int'l Conf. Management of Data, 2000.
[23] D.A. Patterson, G. Gibson, and R.H. Katz, “A Case for Redundant Arrays of Inexpensive Disks (RAID),” Proc. ACM SIGMOD Conf., pp. 109–116, 1988.
[24] M. Stonebraker and G.A. Schloss, “Distributed RAID—A New Multiple Copy Algorithm,” Proc. Sixth Int'l Conf. Data Eng., pp. 430-437, Feb. 1990.
[25] A. Tanenbaum, Distributed Operating Systems. Prentice-Hall, 1995.
[26] O. Torbjornsen, “Multi-Site Declustering Strategies for Very High Database Service Availability,” doctoral thesis, Norges Tekniske Høgskole, IDT Report 1995.2, 1995.
[27] S. Tung, H. Zha, and T. Keefe, “Concurrent Scalable Distributed Data Structures,” Proc. ISCA Int'l Conf. Parallel and Distributed Computing Systems, K. Yetongnon and S. Harini, eds., pp. 131-136, Sept. 1996.
[28] R. Vingralek, Y. Breitbart, and G. Weikum, “Distributed File Organization with Scalable Cost/Performance,” Proc. ACM-SIGMOD Int'l Conf. Management of Data, 1994.
[29] J. Wilkes, R. Golding, C. Staelin, and T. Sullivan, “The HP AutoRAID Hierarchical Storage System,” ACM Trans. Computer Systems, vol. 14, pp. 108-136, Feb. 1996.

Index Terms:
Scalability, distributed systems, distributed data structures, high-availability, fault tolerance, parallelism, multicomputers.
Witold Litwin, Tore Risch, "LH*G: A High-Availability Scalable Distributed Data Structure By Record Grouping," IEEE Transactions on Knowledge and Data Engineering, vol. 14, no. 4, pp. 923-927, July-Aug. 2002, doi:10.1109/TKDE.2002.1019223