An Adaptive Interleaving Technique for Memory Performance-per-Watt Management
July 2009 (vol. 20 no. 7)
pp. 1011-1022
Bithika Khargharia, University of Arizona, Tucson
Salim Hariri, University of Arizona, Tucson
Mazin S. Yousif, Corporate Technology Group, Intel, Portland
With the increased complexity of platforms coupled with server sprawl in data centers, power consumption is reaching unsustainable limits. Researchers have addressed data centers' performance-per-watt management at different hierarchies, from server clusters to individual servers to individual components within the server platform. This paper addresses performance-per-watt maximization of memory subsystems in a data center. Traditional memory power management techniques rely on profiling the utilization of memory modules and transitioning them to some low-power mode when they are sufficiently idle. However, fully interleaved memory presents an interesting research challenge because data striping across memory modules leaves individual modules too little idleness to warrant transitions to low-power states. In this paper, we present a novel technique for performance-per-watt maximization of interleaved memory by dynamically reconfiguring (expanding or contracting) the degree of interleaving to adapt to the incoming workload. The reconfigured memory hosts the application's working set on a smaller set of modules in a manner that exploits the platform's memory hierarchy architecture. This creates the opportunity for the remaining memory modules to transition to low-power states and remain in those states for as long as the performance remains within given acceptable thresholds. The memory power expenditure is minimized subject to application memory requirements and end-to-end memory access delay constraints. This is formulated as a performance-per-watt maximization problem and solved using an analytical memory power and performance model. Our technique has been validated on a real server using the SPECjbb benchmark and on a trace-driven memory simulator using SPECjbb and gcc memory traces. On the server, our technique is shown to give about 48.8 percent (26.7 kJ) energy savings, compared with 4.5 percent for traditional techniques.
The maximum improvement in performance-per-watt was measured at 88.48 percent. The simulator showed 89.7 percent improvement in performance-per-watt compared to the best-performing traditional technique.
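The abstract's formulation can be illustrated with a small sketch: choose the number of active modules (the interleaving degree) that maximizes performance-per-watt while the working set still fits and the access delay stays under a threshold. The delay and power models, module counts, and all numeric constants below are illustrative assumptions, not the authors' measured parameters.

```python
# Hypothetical sketch of adaptive interleaving: fewer active modules means
# less striping (higher delay) but lets idle modules enter a low-power state.
# All models and constants here are illustrative assumptions.

def access_delay(active_modules, working_set_mb, capacity_per_module_mb=512):
    """Toy delay model: the working set must fit in the active modules,
    and delay grows as the interleaving degree shrinks."""
    if active_modules * capacity_per_module_mb < working_set_mb:
        return float("inf")  # working set does not fit; infeasible
    base_delay_ns = 60.0
    return base_delay_ns * (1.0 + 1.0 / active_modules)

def power_watts(active_modules, total_modules=8,
                active_w=5.0, low_power_w=0.5):
    """Active modules draw full power; the rest sit in a low-power state."""
    idle = total_modules - active_modules
    return active_modules * active_w + idle * low_power_w

def best_configuration(working_set_mb, delay_threshold_ns, total_modules=8):
    """Maximize performance-per-watt (1/delay per watt) subject to an
    end-to-end access delay constraint, as in the paper's formulation."""
    best, best_ppw = None, -1.0
    for m in range(1, total_modules + 1):
        d = access_delay(m, working_set_mb)
        if d > delay_threshold_ns:
            continue  # violates the performance constraint
        ppw = (1.0 / d) / power_watts(m, total_modules)
        if ppw > best_ppw:
            best, best_ppw = m, ppw
    return best, best_ppw

m, ppw = best_configuration(working_set_mb=1500, delay_threshold_ns=120.0)
print(m, round(ppw, 6))
```

Under these toy parameters, a 1,500 MB working set needs at least three 512 MB modules, and contracting to exactly three wins: the extra delay is outweighed by powering down the remaining modules.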

[1] A.R. Lebeck, X. Fan, H. Zeng, and C. Ellis, “Power Aware Page Allocation,” Proc. Ninth Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '00), pp. 105-116, Nov. 2000.
[2] Rambus, RDRAM, http:/, 1999.
[3] DDR2 FBDIMM Technical Product Specifications, DDR_DDR2/DDR2SDRAM/Module/FBDIMM/M395T2953CZ4ds_512mb_c_die_based_fbdimm_rev13.pdf, 2006.
[4] SPECjbb2005, html, 2006.
[5] X. Fan, C. Ellis, and A.R. Lebeck, “Memory Controller Policies for DRAM Power Management,” Proc. Int'l Symp. Low Power Electronics and Design (ISLPED '01), pp. 129-134, Aug. 2001.
[6] P. Zhou, V. Pandey, J. Sundaresan, A. Raghuraman, Y. Zhou, and S. Kumar, “Dynamic Tracking of Page Miss Ratio Curve for Memory Management,” Proc. 11th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '04), pp. 177-188, Oct. 2004.
[7] D. Wang, B. Ganesh, N. Tuaycharoen, K. Baynes, A. Jaleel, and B. Jacob, “DRAMsim: A Memory-System Simulator,” SIGARCH Computer Architecture News, vol. 33, no. 4, pp. 100-107, Nov. 2005.
[8] V. De La Luz, M. Kandemir, N. Vijaykrishnan, A. Sivasubramaniam, and M.J. Irwin, “Hardware and Software Techniques for Controlling DRAM Power Modes,” IEEE Trans. Computers, vol. 50, no. 11, pp. 1154-1173, Nov. 2001.
[9] V. De La Luz, A. Sivasubramaniam, M. Kandemir, N. Vijaykrishnan, and M.J. Irwin, “Scheduler-Based DRAM Energy Management,” Proc. 39th Design Automation Conf. (DAC'02), p. 697, June 2002.
[10] H. Huang, P. Pillai, and K.G. Shin, “Design and Implementation of Power-Aware Virtual Memory,” Proc. USENIX Technical Conf., pp. 57-70, June 2003.
[11] H. Huang, C. Lefurgy, T. Keller, and K.G. Shin, “Improving Energy Efficiency by Making DRAM Less Randomly Accessed,” Proc. Int'l Symp. Low Power Electronics and Design (ISLPED '05), pp. 393-398, Aug. 2005.
[12] V. De La Luz, M. Kandemir, and I. Kolcu, “Automatic Data Migration for Reducing Energy Consumption in Multi-Bank Memory Systems,” Proc. 39th Design Automation Conf. (DAC '02), pp. 213-218, June 2002.
[13] X. Li, Z. Li, F. David, P. Zhou, Y. Zhou, S. Adve, and S. Kumar, “Performance-Directed Energy Management for Main Memory and Disks,” Proc. 11th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '04), Oct. 2004.
[14] B. Diniz, D. Guedes, W. Meira Jr., and R. Bianchini, “Limiting the Power Consumption of Main Memory,” Proc. 34th Ann. Int'l Symp. Computer Architecture (ISCA '07), June 2007.
[15] D. Bovet and M. Cesati, Understanding the Linux Kernel, pp. 294-342. O'Reilly, 2002.
[16] T. Hastie, R. Tibshirani, and J.H. Friedman, The Elements of Statistical Learning, pp. 41-75. Springer, Aug. 2001.
[17] S. Ghanbari, G. Soundararajan, J. Chen, and C. Amza, “Adaptive Learning of Metric Correlations for Temperature-Aware Database Provisioning,” Proc. Fourth Int'l Conf. Autonomic Computing (ICAC '07), June 2007.
[18] A Tutorial on Clustering Algorithms, tutorial_htmlkmeans.html, 2007.
[19] Coefficient of Determination (r²), good3.html, 2002.
[20] B. Khargharia, S. Hariri, and M. Yousif, “Self-Optimization of Performance-per-Watt for Interleaved Memory Systems,” Proc. IEEE Int'l Conf. High Performance Computing (HiPC '07), Dec. 2007.

Index Terms:
Application-aware adaptation, energy-aware systems, interleaved memory, modeling and prediction, optimization.
Bithika Khargharia, Salim Hariri, Mazin S. Yousif, "An Adaptive Interleaving Technique for Memory Performance-per-Watt Management," IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 7, pp. 1011-1022, July 2009, doi:10.1109/TPDS.2008.136