Computing in Science & Engineering, vol. 15, no. 1, Jan.-Feb. 2013, pp. 12-15. Published by the IEEE Computer Society.
William J. Feiereisen , Intel Corporation
James D. Myers , Rensselaer Polytechnic Institute
ABSTRACT
The authors consider the obstacles and discuss the importance of integrating the latest techniques of computational science and engineering into digital manufacturing.
The value of computing in science and engineering and the vibrancy of this community are well established. We live in interesting times, where the potential of computing technologies, and the progress being made in their application to a broadening range of science and engineering, is reported regularly in these pages. In discipline after discipline, we see that we're gaining the ability to model new phenomena, to predict properties accurately, to capture dynamics, and to simulate systems and systems-of-systems.
The contributions of these advances are exciting not only from a fundamental research perspective—computing is a major tool for advancing our understanding across science and engineering—but also in terms of applications. We're envisioning new materials, new devices, new ways of improving performance and energy efficiency, and new ways of supporting decision making. If you read CiSE, you know there are countless examples we could use to back these statements. Our world is exciting, fueled by discovery, innovation, and the potential for societal impact.
It's perhaps surprising, then, that the impact of scientific computing on manufacturing, specifically of computing beyond the scale of a desktop machine, is so limited. The potential we see has, in many ways, not yet been realized. Surveys show that very few manufacturing firms employ more than desktop computing. High-performance computing (HPC) centers working with industry, even major ones, can still recite their lists of industry collaborations from memory. Yet, in The Facts about Modern Manufacturing (see www.nam.org/System/Capture-Download.aspx?id=0F91A0FBEA1847D087E719EAAB4D4AD8), the National Association of Manufacturers estimates that in the US alone there are almost 300,000 manufacturing companies, the large majority of them small and medium enterprises (SMEs) with fewer than 500 employees. Fewer than five percent of these companies use scientific computing for design and manufacturing. We're seeing computers and scientific computing techniques that scale to millions of processors. So why is it still news when a company from outside the Fortune 500 uses a small cluster?
What's Slowing Industry Adoption?
Some diffusion time for new techniques to migrate from the lab to industry is to be expected: The future that you can envision from the cutting edge of research takes a while to build. You can also imagine that SMEs, having limited resources, might be slower to adopt capital-intensive techniques such as modeling on large computers. However, the gap we have today is historically wide, even as hardware prices have plunged. In raw performance, 1993's top supercomputer is today's iPad, and SMEs can certainly afford resources far more powerful than desktops. A department-level server with the power of the best supercomputer of the year 2000 can be bought for less than $100,000. If hardware costs aren't the barrier, then what is? There are actually a variety of factors—multiscale and multidisciplinary.
Consider this example: Large companies and laboratories pioneered modeling and simulation and have used them for years. Practical computational fluid dynamics (CFD) and structures modeling were driven by collaborations between government laboratories and leading aerospace and automotive companies in the last century, so CFD and computational structures have, at the high end, been part of practical product design for decades. However, even such well-proven techniques haven't penetrated SMEs to the same extent, not even many of the companies in the supply chains of those large firms. Why aren't modeling and simulation the ubiquitous daily tools of all manufacturers?
Numerous reports have addressed this question. In 2008, the Reveal report (see www.compete.org/publications/detail/421/reflect) analyzed the reasons for this seeming failure to adopt HPC. Reveal was based on a survey of 77 US companies and found that most (97 percent) were thoroughly versed in "virtual prototyping or large-scale data modeling" on desktop computers, and that about half were limited by the computing capability of their desktop systems, forcing them to "scale down their advanced problems to fit their desktop computers." A bit more than half of the companies were open to using HPC under the right circumstances. Reveal found three systemic barriers that have stalled "HPC adoption":

    • lack of application software,
    • lack of sufficient talent, and
    • cost constraints.

Other reports echo these themes.
Understanding the Barriers
Although we still talk about barriers to "HPC adoption," the real barriers are the software, expertise, and other factors that raise risk and increase the time and cost of finding a solution. The lack of adoption isn't caused by a shortage of raw computing cycles.
Even within these categories it's hard to find one issue, one approach that will remove the barrier. Consider software. There are open source codes that scale to the current limits of machine size. There are commercial independent software vendor (ISV) codes that scale to hundreds or even thousands of cores. Look more closely, though, and you won't find the ease of use, breadth of specialization, and level of support available for desktop applications. Can a highly scalable CFD application be used for noise, vibration, and harshness analysis? Sure. Can an SME get its design in and answers out with a reasonable amount of effort and limited wall-clock time? Not without some risk, some help, and perhaps a bit of luck.
Let's talk about meshing, workflow, data transformation, and user interfaces. Although many of the open source tools and current ISV offerings could be adapted to fit many of the strategic needs of small manufacturers, SMEs don't have the time and resources to make such investments on faith. HPC is being delivered in kit form today, yet we wonder why only hobbyists are showing up to use it.
The issue of expertise is similarly complex. The expertise needed to deal with the complexity of parallel codes is rare. There aren't enough engineers familiar with the more accurate (less approximate) techniques that large resources make practical. And the people who manage physical prototyping can have a hard time planning and assessing the progress of a digital prototyping effort.
Colleges and universities aren't producing enough scientists and engineers with the broad palette of skills needed to apply HPC in manufacturing. And, as argued in a recent report from the Harvard Graduate School of Education (see www.gse.harvard.edu/news_events/features/2011/Pathways_to_Prosperity_Feb2011.pdf), we also need to put more effort into post-high-school education oriented toward specific job requirements. Educational requirements for manufacturing jobs have changed dramatically since the report's 1970 baseline, a time when two-thirds of production jobs required no more than a high school education. The need isn't for more people with traditional college degrees, but for something new—computing in science and engineering for technicians, perhaps.
Reveal's cost constraints are also issues that will require "something new" to solve. As noted earlier, the capital cost of hardware isn't the primary issue. Reveal cites the cost of software licenses as more of a barrier. Licenses have often been based on processor or core count, in part as a proxy for per-engineer pricing. Clearly this can make the cost of large parallel runs prohibitive, but how can licenses be changed without cannibalizing revenue from large sites, and why do it now when few customers have the hardware to take advantage of it? ISVs are business savvy and will surely change licensing models as the market changes, but such changes will have impacts across their businesses. Per-hour pricing with no core limit (as CD-adapco offers on some products) lowers the barrier and potentially opens the door to new markets, but how do sales and support adapt to "occasional" users? Setting lower rates for many cores used in a single model run, or for parameter sweeps on one base design, might work, but the change would again ripple through sales, support, and the software's feature set. Significant changes in license models will await changes in customer demand, and customer demand won't change until the license model does. Welcome to a classic chicken-and-egg problem.
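To make the licensing arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. Every figure in it (core count, run length, number of runs, and both prices) is a hypothetical assumption chosen for illustration, not an actual ISV rate: under per-core licensing, an occasional user pays for the peak core count all year, while under metered per-core-hour pricing that user pays only for the hours actually consumed.

    # Back-of-the-envelope comparison of two license models for an SME that
    # runs a handful of large parallel jobs per year. All figures are
    # hypothetical illustrations, not actual vendor prices.

    CORES = 512                 # cores used for one large run (assumed)
    RUN_HOURS = 12              # wall-clock hours per run (assumed)
    RUNS_PER_YEAR = 20          # an "occasional" workload (assumed)

    PER_CORE_LICENSE = 1_000    # hypothetical annual license fee per core, in dollars
    PER_CORE_HOUR_RATE = 0.25   # hypothetical metered rate per core-hour, in dollars

    # Per-core licensing: pay for the peak core count all year,
    # whether or not the machine is busy.
    annual_per_core = CORES * PER_CORE_LICENSE

    # Metered pricing with no core limit: pay only for core-hours consumed.
    annual_metered = CORES * RUN_HOURS * RUNS_PER_YEAR * PER_CORE_HOUR_RATE

    print(f"Per-core licensing at peak {CORES} cores: ${annual_per_core:,.0f}/year")
    print(f"Metered pricing over {RUNS_PER_YEAR} runs:  ${annual_metered:,.0f}/year")

With these assumed numbers, the per-core model costs an order of magnitude more than the metered one for the same amount of work. That gap is exactly what keeps occasional users out, and exactly what ISVs must weigh against revenue from customers who run at peak capacity year-round.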
On to Digital Manufacturing
In 2008, Reveal suggested public-private partnerships as a mechanism for solving such issues in parallel—by creating teams that can provide missing expertise, adapt software to SME needs, offer alternate cost models, and get past the "Who moves first?" issues that any single enterprise faces. (Reveal's authors might have been more prescient than they realized, unwittingly anticipating the decline in public funding that has supported many of the laboratory efforts to address manufacturing HPC problems, a decline that is providing an added push toward public-private models.)
Perhaps these partnerships are the means by which we'll make progress in the near future. In the US, where HPC is increasingly seen as a tool for national competitiveness, digital manufacturing is a theme within advanced manufacturing that could soon see billions in investment (for example, through the Obama administration's proposed Advanced Manufacturing Initiative; see www.digitalmanufacturingreport.com/dmr/2011-06-24/president_obama_launches_advanced_manufacturing_partnership.html). In China, industrial use of HPC is a major driver at many centers. As this interest grows, an increasing number of efforts, both existing and new, are going beyond the traditional "HPC center with an industrial program" model and actively working, through public-private partnerships, to increase manufacturing's use of modeling and simulation. Their programs aim squarely at the problems we've discussed here, and scaling the use of HPC to a much larger manufacturing market is an explicit goal.
An excellent example is described in this issue of CiSE. Mark Shephard, Cameron Smith, and John Kolb describe efforts to apply HPC in New York State—despite economic and technical impediments—in "Bringing HPC to Engineering Innovation."
We invite you to enjoy this issue and look forward to your comments. We also invite you to consider submitting articles about your own efforts to apply HPC and computing in science and engineering in industry, particularly your successes in addressing the challenges that are slowing broad adoption and, ultimately, the growth and evolution of our field.
William J. Feiereisen is a senior scientist at Intel Corporation. His research interests include the high-performance computing aspects of computational fluid dynamics and bioinformatics. Feiereisen has a PhD in mechanical engineering from Stanford University. Contact him at bill@feiereisen.net.
James D. Myers is the associate director for research and development at the Computational Center for Nanotechnology Innovations and a professor of practice in computer science at Rensselaer Polytechnic Institute. His research interests focus on the application of high-performance computing and advanced data-curation mechanisms in education, academic research, and industry. Myers has a PhD in chemistry from the University of California, Berkeley. Contact him at myersj4@rpi.edu.