Mazin Yousif, PhD
CTO, Royal Dutch Shell Global Account
T-Systems International
Phone: +1 503 317 3123
Email: myousif100@gmail.com


DVP term expires 2013


Dr. Yousif graduated in 1979 from the University of Baghdad, Iraq, with a B.Sc. (Honors) in Electrical Engineering. He worked for several years in industry before leaving Iraq to pursue graduate studies in the United States. He received his Master's in Electrical Engineering and PhD in Computer Engineering from the Pennsylvania State University in 1987 and 1992, respectively.

Dr. Yousif is currently the Chief Technology Officer for the Royal Dutch Shell Global Account at T-Systems International. Before that, he was the CTO for Cloud Computing at IBM Canada and served as the chief systems architect for Phase Change Memory at Numonyx Corporation. He was also a Principal Engineer and Director of the Scale-out Virtualization and Autonomics project (the first cloud infrastructure, built in 2003) in the Corporate Technology Group at Intel Corporation in Hillsboro, Oregon, and has served on many research committees. Prior to that, he was one of the principal architects defining the InfiniBand Architecture. From 1995 to 2000, he was a senior architect with the xSeries Division of IBM Corporation in Research Triangle Park, NC. From 1993 to 1995, Dr. Yousif was an Assistant Professor at Louisiana Tech University.

Dr. Yousif has held adjunct professor positions at a number of universities, including Duke, NCSU, and OGI. His current focus is on enabling cloud technologies, energy optimization, and Big Data, and on setting the R&D directions for these areas.


Cloud Computing - an IT Paradigm Changer
Cloud computing is an emerging computing paradigm envisioned to change all facets of the IT landscape, including technology, business, services, and human resources. It is a consumption/delivery model that offers IT capabilities as services billed based on usage. Many such cloud services can be envisioned, but the main ones are IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-Service). The underlying cloud architecture includes a pool of virtualized compute, storage, and networking resources that can be aggregated and launched as platforms to run workloads and satisfy their Service-Level Agreements (SLAs). Cloud architectures also include provisions to guarantee service delivery for clients while optimizing resource efficiency for providers. Examples of such provisions include, but are not limited to, elasticity (scaling resources up or down to track workload behavior), extensive monitoring, failure mitigation, and energy optimization. The two main technologies enabling clouds are (i) virtualization, the foundation of clouds, and (ii) manageability (autonomics), the command and control of clouds.
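As a minimal sketch of the elasticity provision described above (illustrative only, not from the talk; the pool model, thresholds, and names are all assumed), the following Python fragment shows how a controller might scale a pool of virtual machines up or down to track offered load against a target utilization:

    # Illustrative sketch of cloud elasticity: scale a pool of VMs up or
    # down so average utilization tracks a target tied to the SLA.
    # The pool model, thresholds, and names are all hypothetical.
    class ElasticPool:
        def __init__(self, min_vms=1, max_vms=16, target_util=0.60):
            self.vms = min_vms              # currently provisioned VMs
            self.min_vms = min_vms
            self.max_vms = max_vms
            self.target_util = target_util  # utilization the SLA is tuned for

        def reconcile(self, demand):
            """Resize the pool; `demand` is offered load in VM-equivalents."""
            needed = max(self.min_vms,
                         min(self.max_vms, round(demand / self.target_util)))
            if needed != self.vms:
                verb = "scale out" if needed > self.vms else "scale in"
                print(f"{verb}: {self.vms} -> {needed} VMs")
            self.vms = needed

    pool = ElasticPool()
    for load in [0.5, 2.0, 6.5, 6.5, 1.0]:  # sampled workload over time
        pool.reconcile(load)

Real providers layer the same feedback idea with extensive monitoring, cooldown periods, and failure handling.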

This talk is intended to provide an overview of cloud computing, its enabling technologies, and its current challenges. It will also look at clouds' IT and business ramifications as well as the research still required.

Power Management in Servers – A Memory Focus
With the increased complexity of platforms, coupled with server sprawl in data centers, power consumption is reaching unsustainable limits. Memory is an important target for platform-level energy efficiency. Most memory power-management techniques exploit memory modules' multiple power states, transitioning modules to low-power states when they are sufficiently idle, a rarity in fully interleaved memory since data is striped across all modules. This talk introduces a novel technique that dynamically adapts the degree of memory interleaving to the incoming workload. The reconfigured memory hosts the application's working set on a smaller set of memory modules in a manner that exploits the internal memory architecture. The technique saves power while maintaining end-to-end memory access delay and the application's miss ratio through a performance-per-watt maximization solution, validated both on real hardware and in trace-driven memory simulations. On hardware, the proposed technique yields energy savings of about 48.8% (26.7 kJ), compared to 4.5% for traditional techniques, and the improvement in performance-per-watt reached a maximum of 88.48% over the entire execution of the SPECjbb2005 application. On the simulator, the technique yields 48% savings compared to 4.75% for traditional techniques.
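The abstract reports results only; as a hypothetical sketch of the core idea (consolidating the working set onto the fewest modules that still meet a miss-ratio budget and powering down the rest), consider the following Python fragment, where all capacities, power figures, and the miss-ratio model are assumed for illustration:

    import math

    # Hypothetical sketch of adaptive memory interleaving for power savings:
    # host the working set on a subset of modules and put the rest in a
    # low-power state. All numbers below are illustrative assumptions.
    MODULE_CAPACITY_GB = 8
    ACTIVE_POWER_W = 5.0   # per module, active state (assumed)
    LOW_POWER_W = 0.5      # per module, low-power state (assumed)

    def plan(working_set_gb, total_modules, max_miss_ratio, miss_ratio_of):
        """Pick the smallest interleave set that meets the miss-ratio budget."""
        needed = math.ceil(working_set_gb / MODULE_CAPACITY_GB)
        for active in range(needed, total_modules + 1):
            if miss_ratio_of(active) <= max_miss_ratio:
                idle = total_modules - active
                return active, active * ACTIVE_POWER_W + idle * LOW_POWER_W
        return total_modules, total_modules * ACTIVE_POWER_W

    # Toy model: miss ratio falls as more modules stay active.
    active, watts = plan(working_set_gb=20, total_modules=8,
                         max_miss_ratio=0.02,
                         miss_ratio_of=lambda n: 0.08 / n)
    full = 8 * ACTIVE_POWER_W
    print(f"{active}/8 modules active: {watts:.1f} W vs {full:.1f} W "
          f"({100 * (1 - watts / full):.0f}% lower)")

For scale, the reported 48.8% saving of 26.7 kJ implies a baseline memory energy of roughly 26.7 / 0.488 ≈ 54.7 kJ over the measured run.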

Platform Design Considerations with Multi-core Processors
Growing demand for performance and power efficiency is being met by integrating many cores on a single processor chip, which raises an interesting set of architectural challenges both within the chip and across the platform. One primary design consideration is the type and complexity of the cores integrated in the processor, given the plethora of processor architectures (scalar, ILP, TLP, in-order, out-of-order, etc.). Others include the appropriate cache architecture, the underlying network-on-chip connecting the caches and cores, and circuit implementation given a certain number of metal/silicon layers, die area, power budget, and process technology. The chip-level challenges span power and thermal behavior, performance, and reliability.

Similar design considerations exist at the platform level. The primary and most impactful is a memory system architecture that supplies enough sustainable bandwidth to feed the many cores. Besides bandwidth sustainability, the memory architecture must be optimized for the shortest access latency, preferably through single-hop interconnects. The architecture of the network and storage I/O subsystems is also critical to building well-balanced compute-I/O platforms. Software challenges covering the OS, compilers, and applications, as well as the runtime execution environment (synchronization granularity, locking, and scheduling), are even more pronounced. Other challenges are mechanical, covering packaging, thermal, and power delivery, along with providing enough hooks to effectively manage the platform and possibly partition and virtualize its resources. This talk will run through a number of these design considerations and look at the challenges of deploying many-core-processor-based platforms in the enterprise.
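As a back-of-envelope illustration of the bandwidth-sustainability point above (not part of the talk; the per-core demand figure is assumed), the following Python fragment estimates aggregate memory-bandwidth demand as core counts grow:

    import math

    # Aggregate memory bandwidth demand vs. core count.
    # Per-core demand is an illustrative assumption.
    PER_CORE_GBS = 4.0      # sustained demand per core, GB/s (assumed)
    CHANNEL_GBS = 12.8      # one DDR3-1600 channel, GB/s

    for cores in (4, 8, 16, 32, 64):
        demand = cores * PER_CORE_GBS
        channels = math.ceil(demand / CHANNEL_GBS)
        print(f"{cores:3d} cores -> {demand:6.1f} GB/s -> >= {channels} channels")

Demand grows linearly with core count while channel counts are pin-limited, which is why the abstract stresses sustainable bandwidth and short, preferably single-hop, access paths.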