Product Innovation Through Computational Prototypes and Supercomputing

Loren Miller

Pages: 9–17

Abstract—Changing from physical prototype–based product design to computational (virtual) prototype–based product design requires more than leading-edge computational engineering codes, brilliant researchers, and meticulous quantification. The Goodyear Tire & Rubber Company’s experience illustrates a successful, crisis-driven transition to virtual prototype–based product design.

Keywords—CREATE; virtual prototyping; high performance computing; physics-based engineering software; V&V; scientific computing


The flight was routine, but the scenery while landing was spectacular. The approach to the Albuquerque, New Mexico, airport gave clear views of the surrounding desert, the Sandia Peaks to the east, and the ancient volcanoes scattered around the city. The weather was cool and windy without a cloud in the sky, usual for the high desert climate at that time of year.

The source of the Goodyear researchers’ excitement wasn’t just the view, however. The primary reason was the anticipation of the next day’s start of a quarterly review of the Cooperative Research and Development Agreement (CRADA) that Goodyear had signed with Sandia National Laboratories, the engineering lab of the US nuclear weapons research complex. Sandia employed about 8,000 of the most talented engineers and scientists in the country, a couple dozen of whom would meet with the Goodyear visitors the next day. The meeting’s agenda began with a review of the progress made since the last quarterly review on the tasks to develop tools for physics-based computational analysis of tire performance. (While this was seemingly a far cry from Sandia’s primary mission, the laboratory director would testify before Congress in 1995, only two years after the CRADA began, that the CRADA had produced “improved computer codes (that) will be used to solve weapon component design problems that previously were intractable.”1)

Early the next morning, the visitors boarded a bus that passed first through Kirtland Air Force Base’s security, then parked outside Sandia’s administration building where the visitors, although already prescreened, proved their identities and were permitted to go through another set of fences and gates to the building where the meetings would be held. Once inside the conference room, they received a safety and security briefing, in which they were reminded, to the surprise of the first-time visitors, that they would be escorted everywhere, even to the restrooms.

By the second day of the review, the Goodyear and Sandia researchers were deeply engaged in describing the progress they had made on their CRADA tasks. That afternoon, two of Sandia’s most senior engineering mechanics code developers declared that, using a new approach that they outlined, they believed that treaded, rolling tire analyses could be solved and turned around within a time frame that would be acceptable for the design process. This was no small potential breakthrough. At that time, the largest tire model ever created by Goodyear’s finite-element analysts had no tread pattern, couldn’t be rolled with camber, caster, or braking/acceleration forces applied, and required months to create and calculate on state-of-the-art supercomputers with the best mesh generation tools and nonlinear finite-element solvers that were commercially available.

Surprisingly, the proposal by Sandia’s senior developers didn’t generate much comment. At the next break, the Goodyear and Sandia team leaders met to discuss what had just occurred. “Shouldn’t we just have set off some fireworks or something to celebrate?” Goodyear’s team leader asked. In his understated manner, the Sandia leader replied, “Well, we thought it was pretty significant.” They agreed to say nothing in public about the muted reception to the new proposal but to wait until the wrap-up session two days later and then discuss it with both teams together. Two days later, after the Goodyear team members had had time to process the concept of the new proposal, everyone agreed to pursue the potential breakthrough, and the CRADA tasks were modified to incorporate the Sandians’ ideas.

Two observations stood out from this meeting. First, a potentially radical breakthrough in computational tire analysis had been identified. Second, Goodyear’s researchers had not shown immediate excitement at the time of the proposal. They needed time to process it, realizing that the new proposal would change the priorities and, perhaps, direction of their efforts. Given the dedication, talent, and knowledge of the Goodyear team, all the members agreed to evaluate the new paradigm, but they needed a little time and evidence to commit to a change of direction.

Why did Goodyear decide to fund such a radical proposal and work with a nuclear weapons lab anyway?

The Initial Catalyst for Change

In 1990, the Goodyear Tire & Rubber Company experienced its first annual loss since the Great Depression and the loss of its position as the world’s largest tire company.2 The losses were the aftermath of a failing diversification strategy, a US$2.4 billion oil pipeline that stretched from California to refineries in Texas, and the resulting takeover attempt by a foreign financier, Sir James Goldsmith.3 While Goodyear stopped the takeover attempt, the cost was high, and the pipeline, several years later, was sold at a loss.4

Soon after the takeover attempt, the chairman directed the vice presidents of research and product development to significantly reduce testing costs in the product design process. (In 1994, those costs were estimated at 40 percent5 of an approximately $400 million R&D budget.) A small team was formed to develop proposals to do so. Three alternatives were identified:

  • improving the efficiency of the existing physical tests;
  • developing new laboratory test methods and equipment; and
  • replacing physical tests with physics-based computational product design.

Each alternative had its supporters. Three teams were chartered to see what results they could produce.

Improving the efficiency of existing tests seemed the least risky alternative, but it was, in fact, hampered by nearly 100 years of product development and accompanying test development that had served the company so well. For years, the company had dedicated significant resources to quality control, total quality management, Six Sigma, the theory of constraints, and the Fifth Discipline, with the result that the product design process, based on physical prototypes, was both well-structured and well-managed. At the same time, however, the physical test–based product design process was complex, time-consuming, and expensive, since tires, depending on the application, were subject to many different and sometimes competing requirements. For passenger tires, as an example, in addition to governmental safety and performance standards and original equipment customer requirements, Goodyear’s own internal performance standards drove the list of potential performance requirements to well over 100. The laboratory and road test organizations were proud of the processes they had developed and of their contributions to ensuring product quality and performance. The large number of requirements, however, contributed significantly to the complexity, cost, and duration of new product testing.

Laboratory testing had been evolving over time as well. For example, new test methods and equipment had already brought embedded sensors, remote telemetry, noncontacting optical measurement techniques, anechoic chambers, and force and moment machines into the R&D laboratories. While these and other methods had established themselves as part of the product design process for reducing test expenditures and improving test effectiveness, these new laboratory procedures also proved themselves quite valuable for the validation of computational analyses and their predictions.

Physics-based computational product design was considered the “high risk/high reward” way to reduce testing expenditures. There were good reasons for that perspective. In 1992, the year before Goodyear began its CRADA with Sandia, the state of the art in tire analysis was limited to small models with 90,000 degrees of freedom that left out many critical details of the internal and external structures of the tire, for example, tread patterns.6 The models were created manually by skilled finite-element analysts, each using his or her own methods of creating meshes and applying boundary conditions. Each analyst had his or her own “black book” of material properties. The analysis process itself, using state-of-the-art supercomputers, took months, considerably longer than the actual process of building and testing tires. Furthermore, by the time the analyst ran the model and had answers, the designer generally had forgotten his or her questions. In short, finite-element analysis was useful for a posteriori, not a priori, analyses.

Even with Sandia’s potential breakthrough in computational methods, Goodyear still had to surmount formidable psychological barriers to the use of physics-based computational analysis for product design.

The Resistance to Change Is Often Personal

Most change efforts encounter opposition. For those individuals being asked to change, the emotional cost of switching to a different technical approach can be hard to accept.

The director of physics-based computational analysis knew that the upcoming discussion probably wasn’t going to be pleasant. He was visiting the manager of performance prediction at an overseas technical center. The change under discussion wasn’t a big one. In fact, compared to what was to come, it was rather small. The heuristic design evaluation tool that had been written over the course of several years by that manager had been evaluated against three internal competitors. The manager’s tool had been ranked fourth out of four for its ability to predict product performance. The rating was the unanimous decision of the evaluation team, which included associates from the manager’s own department. The team had considered the option of incorporating some of the features of the manager’s tool in a hybrid product, but the effort to do so had been determined to be too lengthy and without sufficient merit.

After some introductory pleasantries and discussion, the manager posed the question dear to his heart: “Will my design tool be adopted as the corporate standard?” The answer was no. After a brief moment of silence, the manager yelled, picked up the inch-thick stack of papers that was the supporting documentation for his tool, threw it at the director, and then stomped out of his own office and slammed the door.

In retrospect, the manager’s actions shouldn’t have been surprising. While his overt expression of anger was unusual, the underlying reaction was quite predictable: many of the engineers and managers were accustomed to creating, ordering, running, and interpreting the experimental tests that had been developed, standardized, and adopted over the course of 100 years. The manager had demonstrated human nature’s resistance to change and, in particular, his pride in his own work products. For some, the switch to virtual prototyping carried a significant fear of obsolescence and, in many associates’ minds, should either be stopped or delayed for as long as possible. (Ironically, while some associates may have worried about retaining their jobs, virtual prototyping actually resulted in the hiring of additional technical staff, not staff reductions.)

The adoption of virtual prototyping faced myriad impediments, including negative comments from senior R&D executives designed to denigrate the new computational method or resist its acceptance:


  • “If computational analysis is such a good idea, we’d already be doing it.”
  • “Computational analysis won’t give us a competitive advantage anyway.”
  • “Just use [insert a commercial finite-element code name] and redeploy your PhD code developers for tech support…”

And finally, from a subsidiary’s well-respected vice president of product development: “I know modeling and simulation are coming. I just hope I retire before we have to implement them.”


Niccolò Machiavelli pinpointed the key issue involved in making innovative changes centuries earlier in his classic leadership book, The Prince7:


“It must be considered that there is nothing more difficult to carry out, nor more doubtful of success, nor more dangerous to handle, than to initiate a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries, who have the laws in their favour; and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it.”


Change doesn’t come easily when it involves radical innovation, risk, or loss of personal pride.

Overcoming Fragmented and Nonstandard Analysis Capabilities

Nevertheless, Goodyear’s internal state of the art in physics-based product analysis was changing rapidly. The CRADA with Sandia had “struck gold,” and models with 250,000 degrees of freedom were now feasible. At that size, a (simple) tread pattern could be attached to the otherwise smooth, axisymmetric tire model. The tire could be run along the ground, made to turn, given camber, caster, braking, and acceleration, and subjected to impulse inputs (potholes and curbs). In short, tremendous progress had been made, and the Sandia researchers’ proposal had been shown to work.8 Skepticism and resistance remained within the design and testing communities, however, and, in some cases, the resistance grew stronger as the “laughably impossible” computational method began to demonstrate its potential. However, several improvements still had to be made, both to the codes and to the process of analysis:

  • The analyses had to be standardized. When different analysts used differing arrays of meshes, material properties, boundary conditions, and postprocessing approaches, yet claimed to get accurate answers, no one was convinced of the predictive power of analysis. Clearly, the analysts, with the best of intentions, were tuning their analyses to match the results of the experiments that they were modeling. Working with Sandia, the team made automated meshing possible9 and implemented it.10 The analysts, working with their new colleagues who had been hired to assist in the development of physics-based computational analysis, established standardized sets of material properties, boundary conditions, and postprocessing approaches (a sketch of what such shared standards might look like follows this list).
  • Existing computer-aided design systems for treads and tire carcasses were modified and used to provide standardized inputs to the model creation process.
  • The modeling process was standardized and automated to create tire designs, predict tire performance, and display and visualize the results.11 The 2,350x speedup of the Sandia-derived analysis code (compared with the best available commercial code in 2005),12,13 together with the automated analysis system, dramatically reduced the time the designer had to wait for results, thus facilitating the evaluation of more design alternatives. Evaluating more design alternatives, in turn, significantly increased the tire designer’s knowledge and intuitive feel. These benefits contributed to the designer’s ability to perform design optimization,8 which enabled the development and release of more innovative, award-winning products from the company’s “New Product/Innovation Engine.”14,15
  • The mathematics of the codes needed to be verified and the physics validated in order to withstand any lingering scientific skepticism. Uncertainty quantification was used to determine how variations in the input variables changed the results of the analyses16,17 (see the uncertainty-quantification sketch after this list). Analysis results were compared to experiments, and the origins of any discrepancies were determined and corrected. These steps provided value to Sandia as well as to Goodyear as Verification, Validation, and Uncertainty Quantification (VV&UQ) were applied to what became part of Sandia’s Sierra Mechanics Tool Suite.
  • The entire supercomputing ecosystem was standardized, including processors, interconnects, libraries, compilers, and networks. When high-performance computing (HPC) standards or equipment needed to change, large sets of test problems were run to ensure that the new configurations of hardware and software gave the same results as had been calculated previously, over a wide range of designs. Daily code builds with nightly runs of difficult problems enabled the identification and correction of problems without exposing them to the designers (a minimal regression-harness sketch also appears after this list).
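
To give a concrete flavor of the standardization work, here is a minimal sketch, in Python, of what a shared analysis specification might look like. Every name and value in it is an invented placeholder, not Goodyear’s actual (proprietary) standard:

    # A single, version-controlled source of truth replaces each analyst's
    # private "black book" of material properties and boundary conditions.
    # All names and values are illustrative placeholders.
    STANDARD_MATERIALS = {
        "tread_compound_A": {"modulus_pa": 5.0e6,  "poisson": 0.49},
        "steel_belt":       {"modulus_pa": 2.0e11, "poisson": 0.30},
    }

    STANDARD_BOUNDARY_CONDITIONS = {
        "inflation_pressure_pa": 2.2e5,
        "vertical_load_n": 4000.0,
        "rim_constraint": "fixed",
    }

    def build_standard_model(design_id: str) -> dict:
        """Assemble an analysis input deck from the shared standards so
        that every analyst's model is constructed the same way."""
        return {
            "design": design_id,
            "materials": STANDARD_MATERIALS,
            "boundary_conditions": STANDARD_BOUNDARY_CONDITIONS,
        }

    print(build_standard_model("concept-001"))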
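The uncertainty-quantification step can likewise be pictured as a toy Monte Carlo study: perturb an uncertain input and measure the spread in a predicted output. The surrogate model and parameter values below are invented stand-ins for a real finite-element solve:

    import numpy as np

    def predicted_deflection(modulus_pa, load_n):
        # Hypothetical surrogate for a tire analysis: a spring-like
        # deflection under load, standing in for a real FE solve.
        return load_n / modulus_pa

    rng = np.random.default_rng(42)

    # Suppose the rubber modulus is known only to within about +/-5 percent.
    modulus_samples = rng.normal(loc=5.0e6, scale=0.25e6, size=10_000)
    deflections = predicted_deflection(modulus_samples, load_n=4000.0)

    lo, hi = np.percentile(deflections, [2.5, 97.5])
    print(f"mean deflection: {deflections.mean():.6f}")
    print(f"95% interval:    ({lo:.6f}, {hi:.6f})")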
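And the nightly ecosystem checks can be read as a plain regression harness: rerun a fixed problem set on the new hardware/software configuration and compare against stored baselines. The problem names, tolerance, and run_analysis stub below are assumptions for illustration, not Goodyear’s actual system:

    import math

    # Figures of merit recorded on the previously qualified configuration
    # (values are illustrative).
    BASELINES = {
        "smooth_tire_inflation": 1.2345e-2,
        "treaded_tire_footprint": 6.7890e-3,
    }
    REL_TOL = 1.0e-6  # acceptable relative drift between configurations

    def run_analysis(problem: str) -> float:
        # Stand-in for launching the real solver on the new configuration
        # and extracting a scalar figure of merit.
        return BASELINES[problem]  # placeholder: pretend the results match

    def configuration_ok() -> bool:
        ok = True
        for problem, expected in BASELINES.items():
            actual = run_analysis(problem)
            if not math.isclose(actual, expected, rel_tol=REL_TOL):
                print(f"FAIL {problem}: {actual} vs. baseline {expected}")
                ok = False
        return ok

    if __name__ == "__main__":
        raise SystemExit(0 if configuration_ok() else 1)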

But something critical was still missing.

Replacing Tests with Computational Analysis

Even as these capabilities were being developed and deployed, the substitution of analysis for physical testing was still stalled, and design engineers continued to specify “build/test/repeat.” In particular, with code development initially focusing on “final” product performance metrics such as tread wear or rolling resistance, the designer still had to build and test tires to make the initial decisions that would define the feasible design space. Not only did these physical tests significantly reduce the benefits of virtual prototyping, but it soon became apparent that no experienced designer would risk his or her reputation by eliminating the earlier physical tests and proposing a design based solely on the computation of final performance metrics derived from computational virtual prototypes.

To overcome these concerns, the code development team altered its approach and collaborated with the tire designers and their management to create very detailed workflow diagrams that listed the prototypes, tests, decisions, and the sequence that they followed in the traditional (physical) design process. These workflow diagrams, which filled the walls of a large conference room, were also used to define what computational analyses were required, in what sequence, and at what level of detail. With this information, the code development teams, particularly the one automating the design process, knew what virtual prototypes to create and in what order to present them to the product designer. Beyond proving feasibility, creating expensive, highly detailed solutions for the final performance metrics was insufficient to ensure code usage; the earlier, simplified test requirements first had to be demonstrably met in a virtual environment where dozens of designs could be run and analyzed before final design selection and physical production commenced.

As an example, one of the first steps in the physical prototype-based product design process of a unique new tire concept was to build a tire with no tread pattern, inflate it, load it statically, and observe the shape and pressure distribution of the contact patch of the tire (footprint) on the ground. Using this very simple test, key performance characteristics of the internal construction could be determined by comparison to the previous test results of other constructions, and modifications could be made to the design as required. The process would be repeated with successively more complete physical prototypes and more complex tests until the optimum design was developed, a time-consuming and expensive process.

In comparison, using virtual prototyping, the benefit of first doing a very simple computational analysis was that infeasible designs could be eliminated quickly with simplified, easy-to-construct and -run, rapidly converging models. The initial virtual prototypes might require only a few minutes of computation time, much less than the time required to build and test a physical prototype. The very last, most complex virtual test required several days to run, still far faster than the several months that the corresponding physical test required. Furthermore, because more design options could be tested virtually than physically, the probability of a failure late in the design process was reduced significantly. That improvement in the probability of success compressed the overall schedule further, and, by following the traditional workflow that had been developed for physical prototypes, design engineers using virtual prototypes converged quickly on creative and robust designs.
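
One way to read this simple-to-complex progression is as a screening cascade: cheap virtual tests prune the design space before the expensive ones run. A minimal sketch of that control flow follows; the stage names, metrics, and thresholds are invented for illustration:

    def screen(designs, stages):
        """Run candidate designs through virtual tests, cheapest first,
        so infeasible designs exit before expensive analyses begin."""
        survivors = list(designs)
        for name, passes in stages:
            survivors = [d for d in survivors if passes(d)]
            print(f"{name}: {len(survivors)} design(s) remain")
        return survivors

    # Hypothetical candidates described by precomputed performance metrics.
    designs = [
        {"id": 1, "contact_pressure": 2.8, "rolling_resistance": 0.010},
        {"id": 2, "contact_pressure": 3.4, "rolling_resistance": 0.009},
        {"id": 3, "contact_pressure": 2.6, "rolling_resistance": 0.013},
    ]

    stages = [  # ordered from minutes-long to days-long analyses
        ("static footprint", lambda d: d["contact_pressure"] < 3.0),
        ("steady-state rolling", lambda d: d["rolling_resistance"] < 0.012),
    ]

    finalists = screen(designs, stages)
    print("finalists:", [d["id"] for d in finalists])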

Skepticism Still Abounded

After this change in emphasis by the code development team, agreements were made between the code developers and design management such that, after each incremental set of requirements was defined and met, a specific computational test would be released for general use. Time after time, however, once the specified set of requirements was met, another set of requirements was identified that “still had to be met” before that computational test would be implemented. The resulting delays were exacerbated by the existence of four product design centers, two in the US and two in Europe, whose engineering staffs and managements often disagreed with each other both on priorities and implementation details. The resulting “movement of the goal posts” persisted until a single manager was appointed to supervise all computational test implementation globally. The new manager, while maintaining demanding computational analysis performance standards, implemented virtual prototypes when the virtual and physical tests agreed over a wide range of designs.
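
The release criterion the new manager applied can be pictured as a simple gate: a computational test replaces its physical counterpart only when the two agree across a wide range of designs. A hedged sketch, with an invented 5 percent tolerance and illustrative data:

    def agrees_everywhere(virtual, physical, rel_tol=0.05):
        """True if every virtual prediction falls within rel_tol of its
        physical counterpart across the whole design set."""
        return all(abs(v - p) <= rel_tol * abs(p)
                   for v, p in zip(virtual, physical))

    # Paired results for the same tire designs (illustrative numbers).
    virtual_results = [101.0, 87.5, 93.2, 110.4]
    physical_results = [100.0, 88.0, 95.0, 108.0]

    if agrees_everywhere(virtual_results, physical_results):
        print("Release the computational test for general use.")
    else:
        print("Keep the physical test until discrepancies are resolved.")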

Duplicating experimental tests, proceeding from the simplest to the most complex, and making those computational tests available in an automated, standardized environment that guided the analysis in the sequence already familiar to the designer was a key to gaining adoption.

Even then, opposition heightened when computational analysis predicted a race tire phenomenon that had never been observed physically. Fortunately, a highly respected experimentalist from the laboratory testing team resolved to employ the latest optical methods to look for the phenomenon. Skepticism turned to a measure of acceptance when the experimental analysis proved that the phenomenon predicted by computational analysis was real.

Although physics-based computational analysis had proven itself capable of providing tremendous insight and utility, designers remained reluctant to switch to virtual prototyping. No one wanted to take the risk of being the first to employ the new method and fail.

Another Crisis Forced the Change

In 2001, another financial crisis hit Goodyear. “For the year, the company lost $204 million, its first annual loss since 1992.”18 Perhaps to underscore how serious the company considered the situation, it published its 2002 Annual Report, “Driving the Turnaround,” on plain paper with no pictures. Soon thereafter, a new CEO resolved to restore dealer confidence and lift revenues with the rapid introduction of a “new, exciting” tire. The innovative tire would have to be introduced at the annual dealers’ conference in less than a year. The company assigned the highly respected inventor of the Aquatred, Goodyear’s most successful new passenger tire introduction to date, to design the “new, exciting” tire. Since physical prototype-based methods of tire development required 3+ years, the lead designer made a courageous decision: “I didn’t know if modeling could meet the challenge, but I did know that the iterative process of building and testing tires couldn’t.”13 Using physics-based computational analyses throughout the design process, the resulting tire, the Assurance, featuring TripleTred Technology, was delivered in time for the dealer conference and became the best-selling new tire introduction in the company’s history. Its exceptional performance on wet, dry, snowy, and icy roads, while still providing excellent treadwear and noise performance, won numerous industry awards and earned Goodyear and Sandia an R&D 100 Award in 2005 both for the new product and for the process by which it had been created. One year later, Goodyear’s implementation of HPC in support of these computational analyses won a CIO 100 Award and was chosen one of the year’s five most innovative applications of IT.

Fortunately for Goodyear, by 2003 the computational analysis processes and codes had matured to the point that the “new, exciting” tire could be designed computationally and undergo physical, final release tests in the extraordinarily short time that was available. After that resounding success, the resistance to computational analysis crumbled and the company converted to virtual prototyping, product line by product line. Together with Goodyear’s internal supercomputing facilities and design process automation efforts, the conversions cut product development time by a factor of four19 and saved approximately $100 million annually in testing costs (a 60 percent reduction).5 As Goodyear’s CTO said in 2009, “Computational analysis tools have completely changed the way we develop tires. They have created a distinct competitive advantage for Goodyear, as we can deliver more innovative new tires to market in a shorter time frame.”20

A radical technology shift was accomplished over a period of a decade in spite of 100+ years of precedent, well-established physical tests and procedures, and people’s natural resistance to radical change. What other companies or industries have adopted virtual prototyping, and did adoption always require a crisis?

Change without Crisis

When Goodyear’s odyssey to virtual prototyping using physics-based computational analysis began in 1992, the tools, procedures, software, HPC hardware, and mindset to embrace physics-based computational product design were all under development and considered “theoretical possibilities” at best. Since then, the use of computational analysis has expanded significantly and been applied to products with many more materials and components. While tires may contain dozens of distinct materials and components, products ranging from refrigerators to engines to automobiles, airplanes, and ships can involve thousands or millions of parts and require the analysis of multiple physics and chemistry regimes. Well-respected major companies, including Airbus,21 Alcoa, Boeing, Dana, Ford, GE Aviation & Energy, P&G, Whirlpool,22 and Porsche,23 have disclosed their use of science-based computational analysis, but few beyond Goodyear have explicitly discussed the replacement of physical tests with virtual prototyping. Notable exceptions are Renault, General Motors, Boeing, Rolls-Royce, and PING:

  • Renault. “Renault has been able to embed a wide range of software for developing appropriate models and tools to reduce the number of physical tests and, by consequence, the cost of development and the overall time to market.”24
  • General Motors. GM notes that crash “simulations are best at reducing the number of repeated tests during the design process, not providing final proof that a car is safe. We do fewer [physical] tests, but we’re evaluating many more requirements. Doing more with less.”25
  • Boeing. “By using supercomputers to simulate the properties of the wings on recent models such as the 787 and the 747-8, we only had to design seven wings, a tremendous savings in time and cost, especially since the price tag for wind tunnel testing has skyrocketed over the past 25 years. The amount of wind tunnel testing has decreased by about 50 percent.”26
  • Rolls-Royce. On its website (https://www.rolls-royce.com/about/our-technology/enabling-technologies/high-performance-computing.aspx), Rolls-Royce notes that “HPC is used primarily for highly parallel Computational [sic] fluid dynamics (CFD) and Finite [sic] element analysis (FEA) applications.” Rolls-Royce goes on to state that “the modeling and simulation technologies have been developed and refined to such an extent as to allow some engine rig tests to be taken out thus reducing cost and risk at product development stage.”
  • PING. PING is a manufacturer of high-end golf equipment in a fiercely competitive market. “Most companies realize about five to 20 percent of their revenues from new products. At PING more than 85 percent of the company’s revenues are generated by new products that have been developed in the last two years. The implication is clear–innovate or die.” Using physics simulation software developed at Lawrence Livermore National Laboratory and a supercomputer, “the system can simulate what happens to the club and the golf ball when the two collide, how the components shift during the stroke, and what happens if different materials are used in the club head and shaft. This approach has allowed us to cut our design cycles from 18 to 24 months to eight or nine months. And in that same shortened cycle, we are able to produce five times more products and make them available to the public. All this with the same staff, same factory, same equipment.”27

While Goodyear and PING were early adopters of virtual prototyping, Renault, General Motors, Boeing, Rolls-Royce, and, surely, many other major corporations are in the process of implementing a key benefit of physics-based computational product design by replacing some of their experimental tests with virtual prototyping.

For small to medium enterprises (SMEs), however, less is known about the extent of the elimination of prototype testing. While a number of successful SME initiatives have been launched in the US and Europe, the extent to which physical prototyping has been reduced or eliminated remains largely undocumented. One exception comes from the efforts of the Korean Institute of Science and Technology Information (KISTI), which began working with SMEs in 1998. In 2004, KISTI initiated a countrywide program joining industry, academia, and supercomputer centers to help Korean SMEs improve profitability and global competitiveness. While KISTI didn’t publicly document the elimination of specific prototype tests, it documented that the average SME using supercomputing technology saved 53.4 percent in development costs and 52.4 percent in development time.28

Lessons Learned

While some researchers were already investigating opportunities in computational engineering analysis, the sustained conversion to virtual prototyping was initiated in response to external pressures.

Executive management direction and support were required to initiate and sustain the change effort. Most associates initially resisted it, and a minority of associates never accepted the change to virtual prototyping.

Quarterly project reviews with executive management (and Sandia) ensured continued focus on deliverables. Short-term deliverables also reinforced the value of the initiative to executive management.

Day-and-a-half visits to Sandia by each CEO ensured their appreciation of the knowledge and dedication of Sandians and the magnitude of their potential contributions. Win/win collaborations were required with external resources, such as Sandia, as well as with associates and internal organizations. Identifying and working with associates who could facilitate such collaborations was critical.

The computational tools needed to explicitly support the product development workflow and be integrated into it. The switch to virtual prototyping required the analysis codes, workflows, IT equipment, material properties, and so on all to be subjected to painstaking VV&UQ. In Goodyear’s case, the conversion to a fully virtual prototype design process required a crisis for implementation. However, hybrid physical/virtual prototype design processes are being adopted by leading corporations.

As the complexity of products increases, from tires to jet engines to airplanes and ships, the complexity of the design process increases accordingly. Nevertheless, for those products, subsystems, and tests that fall within the capabilities of science-based computational analysis and supercomputing, product designers and their organizations now have the opportunity, step by step and with much careful VV&UQ, to reduce or replace costly and time-consuming experimental tests. Doing so enables more creative design approaches, shorter design times, more innovative products, and less expensive product design, with greater confidence in the performance of the final product.

Given the competitive nature of global manufacturing and continuing improvements in engineering mechanics software and supercomputing hardware, the expanding use of science-based virtual prototyping for product design is inevitable. However, the key criteria for an organization to consider before starting an initiative still include the following:

  • How important and time-critical is new product innovation to our organization?
  • Are the pertinent science, mathematics, material properties, and so on well enough known that we can select or construct robust and reliable computational prototyping tools?
  • Do we have the expertise to employ those tools?
  • Are we willing to make the necessary investments in people, software, hardware, and workflow documentation and modification?
  • Do we have the resolve, competitive pressure, or perhaps a looming crisis, to make the change happen?

The final step in the transformation from physical to virtual prototyping requires moving past the “incredulity of mankind.”7 After the scientists, engineers, and code developers convince themselves that their virtual prototypes accurately reflect reality, the product designers themselves must believe that virtual prototyping’s benefits exceed its unknown risks.

Acknowledgments

This article is dedicated to the talented and committed technology developers at Goodyear and Sandia who persevered despite the obstacles, to the technology transfer personnel who worked hard to enable it to happen, to the honest brokers who did what they promised, to the lead designer who had the courage to embrace change, and to the CTO who kept the faith. Many thanks to Scott Sundt, Douglass Post, and Richard Kendall of the US Department of Defense’s HPCMP CREATE Program for their comments and suggestions. Assurance, featuring TripleTred Technology, is a trademark of the Goodyear Tire & Rubber Company, 200 Innovation Way, Akron, Ohio 44316.

References



Loren Miller is president of DataMetric Innovations, where he focuses on creating competitive advantage at the intersection of science, engineering, and information technology. He previously worked at Goodyear, where he initiated and led the company’s conversion to virtual prototyping, including the development of the physics-based engineering software, the creation of a global supercomputing environment, and the establishment of a CRADA with Sandia National Laboratories. Miller holds three US patents and received an MS in physics from the University of Akron. Contact him at loren.miller@icloud.com.