SPEK: A Storage Performance Evaluation Kernel Module for Block-Level Storage Systems under Faulty Conditions
April-June 2005 (vol. 2 no. 2)
pp. 138-149
Xubin He, IEEE
Ming Zhang, IEEE
Qing (Ken) Yang, IEEE
This paper introduces a new benchmark tool, SPEK (Storage Performance Evaluation Kernel module), for evaluating the performance of block-level storage systems under faulty conditions as well as during normal operation. SPEK works on both Direct Attached Storage (DAS) and block-level networked storage systems such as storage area networks (SANs). Each SPEK instance consists of a controller, several workers, one or more probers, and several fault-injection modules. Because it runs at the kernel level and eliminates the skew and overhead caused by file systems, SPEK is highly accurate and efficient. It allows a storage architect to apply configurable workloads to a system under test and to inject different faults into various system components such as network devices, storage devices, and controllers. Performance measurements under different workloads and faulty conditions are collected and recorded dynamically over time. To demonstrate its functionality, we apply SPEK to two direct attached storage systems and two typical SANs under Linux with different fault injections. Our experiments show that SPEK is highly efficient and accurate for measuring the performance of block-level storage systems.
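SPEK's own kernel-level interfaces are not described in this abstract, so the sketch below is only a rough user-space analogue of a SPEK "worker," not the authors' code: it opens a block device with O_DIRECT to bypass the file system and page cache, issues reads at random block-aligned offsets, and records per-request latency. The device path, block size, and request count are assumed parameters chosen for illustration.

/*
 * Illustrative user-space analogue of a SPEK "worker" (not the authors'
 * code): issues O_DIRECT reads straight to a block device, bypassing the
 * file system, and records per-request latency.  Device path, block size,
 * and request count are assumptions for illustration only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE   4096   /* one request = one 4 KB block (assumed) */
#define NUM_REQUESTS 1000   /* workload length (assumed) */

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sdb";   /* device under test */
    int fd = open(dev, O_RDONLY | O_DIRECT);             /* bypass page cache */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) {  /* O_DIRECT needs alignment */
        perror("posix_memalign"); return 1;
    }

    off_t blocks = lseek(fd, 0, SEEK_END) / BLOCK_SIZE;  /* device size in blocks */
    srand(42);                                           /* reproducible workload */

    for (int i = 0; i < NUM_REQUESTS; i++) {
        off_t off = (off_t)(rand() % blocks) * BLOCK_SIZE;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (pread(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE) { perror("pread"); break; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("%d %jd %.1f\n", i, (intmax_t)off, us);   /* req#, offset, latency (us) */
    }

    free(buf);
    close(fd);
    return 0;
}

Compile with gcc -O2 and run against a raw device (root privileges required). SPEK proper implements this logic inside the kernel, which avoids even the system-call overhead this sketch incurs per request.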

Index Terms:
Measurement techniques, performance analysis, degraded performance, data storage, disk I/O.
Citation:
Xubin He, Ming Zhang, Qing (Ken) Yang, "SPEK: A Storage Performance Evaluation Kernel Module for Block-Level Storage Systems under Faulty Conditions," IEEE Transactions on Dependable and Secure Computing, vol. 2, no. 2, pp. 138-149, April-June 2005, doi:10.1109/TDSC.2005.27