Architectural Risk Assessment and System Performance More Tightly Linked Than Previously Thought
By Lori Cameron
The emergence of new chip technologies, memory technologies, and computing devices/paradigms requires new ways of assessing architectural risk.
While risk assessment and management are typically treated as separate from issues of performance, they are more tightly linked than one might expect, say Weilong Cui and Timothy Sherwood, authors of “Architectural Risk” in the May/June 2018 issue of IEEE Micro.
“Architectural risk, intuitively, is the degree to which the performance of a design is fragile in the face of unknowns. In many industrial settings, high-level architectural design decisions are made at the level of spreadsheets and other high-level analytical models or data points drawn from experience. Unlike in software, operating systems, and device modeling, most do not consider the uncertainty in the assumptions being made nor the fragility of the decisions with respect to those uncertainties. Here, we concentrate on such analytical models of architecture,” they say.
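The article does not include code, but the idea of treating an analytical performance model's inputs as uncertain rather than fixed can be sketched in a few lines. The following is a minimal illustrative example, not the authors' framework: the Amdahl-style speedup model, the beta-distributed uncertainty on the parallel fraction, and the 5th-percentile downside metric are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def speedup(f, n):
    # Amdahl's law: the serial fraction (1 - f) caps achievable speedup
    # on n cores. This stands in for a high-level analytical model.
    return 1.0 / ((1.0 - f) + f / n)

# Instead of a single point estimate for the parallel fraction,
# sample it from an assumed uncertainty distribution.
f_samples = rng.beta(8, 2, size=10_000)

for n in (4, 16, 64):
    perf = speedup(f_samples, n)
    # Expected performance alone hides fragility; a downside
    # percentile is one simple way to quantify risk.
    print(f"n={n:3d} cores: mean speedup {perf.mean():5.2f}, "
          f"5th-percentile {np.percentile(perf, 5):5.2f}")
```

Comparing the mean against the downside percentile across candidate designs is the spirit of the approach: a design that looks best on average may degrade sharply once the unknowns are allowed to vary.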
Given a few data points, the authors first check whether the dataset can be transformed to normality via the Box-Cox transformation (a technique for reshaping non-normal data into an approximately normal distribution).
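For readers unfamiliar with it, the Box-Cox transformation raises positive data to a power λ chosen to make the result as close to normal as possible. Below is a minimal self-contained sketch (a grid search over λ using the standard profile log-likelihood); it is an illustration of the transform itself, not the authors' pipeline.

```python
import numpy as np

def boxcox(x, lmbda):
    # Box-Cox power transform; lmbda = 0 reduces to the natural log.
    x = np.asarray(x, dtype=float)
    if lmbda == 0:
        return np.log(x)
    return (x**lmbda - 1.0) / lmbda

def best_lambda(x, grid=np.linspace(-2, 2, 81)):
    # Choose lambda by maximizing the Box-Cox profile log-likelihood.
    x = np.asarray(x, dtype=float)
    n = len(x)

    def loglik(l):
        y = boxcox(x, l)
        return -0.5 * n * np.log(y.var()) + (l - 1.0) * np.log(x).sum()

    return max(grid, key=loglik)

# Log-normal data should recover a lambda near 0 (the log transform).
samples = np.exp(np.random.default_rng(1).normal(size=2000))
print(best_lambda(samples))
```

If no λ on the grid yields an acceptably normal result, the data cannot be treated this way, which is exactly the case the authors' workflow has to detect before fitting a distribution.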
Below is a chart depicting front-end workflow for automatic risk quantification.
Below is a chart depicting back-end workflow for automatic risk quantification.
Below are core configurations of performance-optimal designs for LPHC (low parallelism and high communication overhead).
As application uncertainty grows, more asymmetric configurations are preferred. However, as architecture uncertainty grows, symmetric configurations are generally favored.
These are just a few examples of the new concepts and tools the authors propose for dealing quantitatively with uncertainty early in the design cycle.
“The goal of this new line of work is to promote a new first-order design concern (architectural risk) and to provide a systematic framework to quantify such risks, with the aim of helping find designs that are more robust to the impacts of uncertainty than performance-only optimal designs while still maintaining very strong performance in the common case,” the authors say.
Lori Cameron is a Senior Writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at firstname.lastname@example.org. Follow her on LinkedIn.