Visual Computing Challenges of Advanced Manufacturing and Industrie 4.0

André Stork, Technische Universität Darmstadt

Pages: 21–25

Abstract—The (r)evolution in Industrie 4.0 is being accelerated by the wide adoption of networking and Internet technology in traditional industries such as manufacturing shop floors, aiming at CPPS for future factories. Furthermore, advances in additive manufacturing, modeling systems, physics-based simulation, and the computational representation of materials have created a framework to enable new intelligent factories characterized by adaptability, resource efficiency, and ergonomics. This special issue serves to familiarize the computer graphics community with the Industrie 4.0 and Industrial Internet initiatives and to mobilize more computer graphicists to initiate research and application development in this emerging and attractive area.

Keywords—computer graphics; Industrie 4.0; Industrial Internet; manufacturing applications; visual computing; computer vision; 3D modeling


Two to three years ago, I heard about Industrie 4.0, Industrial Internet, and cyber-physical production systems (CPPS) for the first time. Although there are many general information and communication technologies (ICTs) involved (such as protocols and interoperability on the production machine level), as a computer graphicist, I started to wonder about the role of visual computing (computer graphics and computer vision) in these efforts.

Industrie 4.0 and the Industrial Internet

Over the past few years, there has been a clear trend toward more individualized and customized products that require increasing flexibility in production. As a result of new manufacturing capabilities (such as 3D printing), socioeconomic trends, and the effects of the worldwide economic crisis in 2008/2009, the manufacturing industry recognized the need to make their production facilities as well as supply and logistic chains more flexible. Globally, this effort would also reinforce economies traditionally strong in engineering and manufacturing.

Politically, the goal is to regain jobs by bringing back production from low-wage countries or reindustrializing regions that have become more concentrated on the service sector. Among others, examples of these efforts include Apple bringing back production to the United States, Adidas bringing back production of customized shoes to Germany, and Local Motors aiming to locally manufacture their 3D-printed cars.

These trends have motivated various initiatives worldwide such as Industrie 4.0 in Germany and the Industrial Internet in the US.

The term Industrie 4.0 (for further details, see the “Visual Computing as a Key Enabling Technology for Industrie 4.0 and the Industrial Internet” article by Jorge Posada and his colleagues in this special issue) refers to the “fourth industrial revolution,” or the introduction of Internet technology in the manufacturing industry to render factories more intelligent; increase adaptability, resource efficiency, and ergonomics; and integrate customers more closely into the product definition stage as well as business partners into the value and logistic chains. Technologies such as cyber-physical systems (CPS) and the Internet of Things are driving factors behind this idea. Although somewhat controversial, a visionary interpretation of the concept is that digital (as well as physical) products will become self-aware and that production machines should be able to autonomously manufacture customized products without centralized preplanning of the whole production process.

The term Industrial Internet, coined by General Electric in 2012, refers to the integration of complex physical machinery with networked sensors and software—very similar to the Industrie 4.0 idea of introducing networked sensors with Internet connectivity and embedded systems into production machinery. The Industrial Internet, sometimes also referred to as the Industrial Internet of Things (IIoT), draws together fields such as machine learning, big data, and machine-to-machine communication to collect data from machines, analyze it (often in real time), and use it to adjust operations. This initial idea has expanded to scenarios covering autonomous driving, healthcare applications, and so on, and it has generated a big movement in the US organized in the Industrial Internet Consortium (www.iiconsortium.org), with more than 100 members comprising companies such as GE, IBM, Intel, and Toyota.

Visual Computing

The term visual computing is now omnipresent in the computer graphics community, but there are various interpretations and no single definition. (The following extended definition is based on the description of visual computing at www.technopedia.com.) Visual computing deals with the acquisition (capture of real-world objects or phenomena), analysis, and synthesis of visual data through the use of computer resources (see Figure 1). It encompasses computer science, mathematics, physics (for example, in the case of simulation), and the cognitive sciences. Visual computing aims to let us control and interact with activities through the manipulation of real or virtual objects, including representations of nonvisual objects. The media involved can be tangible objects, images, 3D models, videos, and abstract representations such as block diagrams or even simple icons.

Figure 1. Conceptual model of visual computing. This field deals with the acquisition, analysis, and synthesis of visual data through the use of computer resources.

Several computer graphics disciplines are involved in visual computing:

  • image processing to manipulate images (pixels);
  • computer vision to derive information (models and semantics) from images;
  • rendering to generate images (pixels) out of representations (such as 3D models);
  • modeling and simulating to generate digital objects (such as geometric or physically based modeling) and perform model-based simulation, eventually creating enhanced models;
  • human-machine interaction (HMI) designed to bridge the virtual and real worlds;
  • capturing and acquiring images or sensor data from the real world and then deriving 3D models; and
  • modeling the properties of real-world objects, such as appearance, behavior, and function.

Visual Computing and the Industrial Initiatives

Visual computing will be a key enabling technology for designing and implementing the smart, cognitive behavior in product engineering and manufacturing envisioned by the Industrie 4.0 and IIoT initiatives. This trend is considered the next big game changer for industrial engineering and manufacturing. Propelled by the convergence of machines and intelligent data, the computational technologies behind 3D printing, the Maker movement, robotics, and advances in materials science will create new challenges and drive computer graphics and modeling research, taking graphics back to its CAD and Sketchpad roots. The following examples illustrate some of the challenges the Industrie 4.0 and IIoT initiatives impose on visual computing.

End-user-oriented front-ends (natural user interfaces) are needed to customize existing or design new products. To do so, we need easier-to-use modeling tools. For example, sketch-based modeling has been around for a while, but 3D in Web browsers and affordable 3D printing (see Shapeways, www.shapeways.com) have made this a requirement for the masses of Internet users.

Interactive geometric 3D design and functional modeling for CPPS—that is, virtual engineering tools—are necessary for new representation schemes facilitating self-aware virtual product and production definition. This aspect is directed toward professional users who need to design, simulate, and test the geometry and autonomous behavior of products in production systems.

New generations of virtual and digital factory tools must be able to cope with self-configuration, adaptation, and autonomous behavior. Therefore, virtual environments for testing self-configuration are required. This is a field where synergies with autonomous robotics and driving are likely to occur.

New capabilities and approaches for production are challenging traditional design and engineering systems. For example, multimaterial 3D printing offers the opportunity to print 10 million voxels per square inch and more. Thus, in the future, it will be necessary to represent (and design with) advanced materials, including biomaterials, metals, composites, and polymers.
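As a toy illustration of the representational shift multimaterial printing implies, the sketch below stores a part as a voxel grid with a per-voxel material index. The material table, resolution, and geometry are invented for this example and correspond to no particular printer or file format:

```python
import numpy as np

# Illustrative multimaterial voxel grid: each cell stores an index
# into a material table. Materials and resolution are made up here.
MATERIALS = ["void", "polymer", "metal", "composite"]

res = 64  # voxels per axis for this toy part
grid = np.zeros((res, res, res), dtype=np.uint8)  # 0 = void

# A solid polymer block with an embedded metal core.
grid[8:56, 8:56, 8:56] = MATERIALS.index("polymer")
grid[24:40, 24:40, 24:40] = MATERIALS.index("metal")

# A query a design tool might run on such a representation.
metal_voxels = int((grid == MATERIALS.index("metal")).sum())
print(metal_voxels)  # -> 4096 (a 16x16x16 core)
```

Even at this toy resolution the grid holds 64³ ≈ 262,000 cells; design tools for real print volumes face orders of magnitude more, which is precisely the modeling challenge noted above.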

Even if production machines become more intelligent and connected in the future, human beings will still play an important role. Therefore, new forms of communication (visual or auditory) need to be developed for bidirectional human-machine communication. Scene analysis and intention recognition, for example, will be needed for improved machine-worker collaboration.

In the short term, CPPS will not be autonomous or powerful enough to make all necessary decisions purely locally. For the time being, systems will need to be simulated in advance and synchronized with computer models, for example, using a virtual factory tool. To this end, it is important to capture the dynamic processes carried out in reality (articulated production machines or robots) and transform them into computer representations (models) to support planning tasks based on current information. Computer vision approaches for 3D reconstruction of dynamic processes on the shop floor can help keep physical and virtual representations in sync.
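One elementary building block of such synchronization can be sketched as follows: compare a captured point cloud against points sampled from the planned (virtual) model and flag regions that deviate, signaling that the virtual factory needs updating. The data, tolerance, and brute-force nearest-neighbor search below are illustrative only; a real system would work on registered scans and use an acceleration structure such as a k-d tree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points sampled from the planned virtual model, and a
# captured scan that mostly matches it up to sensor noise...
planned = rng.uniform(0.0, 1.0, size=(500, 3))
captured = planned + rng.normal(0.0, 0.002, size=(500, 3))
captured[:10] += 5.0  # ...except for a machine part that was moved

# Brute-force nearest-neighbor distance from each captured point
# to the planned cloud (fine for toy sizes, not for real scans).
d = np.linalg.norm(captured[:, None, :] - planned[None, :, :], axis=2)
nearest = d.min(axis=1)

out_of_sync = nearest > 0.05  # tolerance in scene units
print(int(out_of_sync.sum()))  # -> 10 (only the moved points)
```

Flagged regions could then be rescanned at higher fidelity or pushed to a planner as updated geometry.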

In parallel with other developments sketched out here, simulation will increasingly be brought to the machine level as we move away from the preplanned behavior implemented in rule sets and toward simulation/optimization algorithms that support autonomous decisions, using general-purpose computing on graphics processing units (GPGPU) approaches for example. Thus, simulation approaches will need to support local decision making at the machine level.

In the coming years, we will see more augmented reality (AR) applications brought into real use on the shop floor because machines are becoming increasingly complex and agile. Workers may only be able to use such machines efficiently if the machines communicate visually and dynamically. Thus, visualization techniques will be needed to convey a machine's intent. The emergence of powerful, low-cost mobile devices and head-mounted displays (such as the Oculus Rift) will make this economically feasible.

Lastly, an increasing number of sensors are now installed in factories and products, and they are collecting massive amounts of data during both the production and use phases. Visual analytics will thus become a key enabler in companies far beyond just business processes (business intelligence). The role of big data from CPPS will reach into design, engineering, manufacturing, use, maintenance, and reuse as we learn to cope with new types of data and questions.
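A minimal sketch of the kind of preprocessing that sits between such sensor streams and a visual analytics front end: a rolling window summarizes each signal so the front end only needs to render aggregates and anomalies rather than every raw reading. The window size, threshold, and readings are arbitrary choices for this toy example:

```python
from collections import deque

def rolling_flags(stream, window=10, k=3.0):
    """Yield (value, is_anomaly) pairs using a rolling mean/stddev test.

    A reading is flagged when it deviates from the rolling mean by
    more than k standard deviations of the preceding window.
    """
    buf = deque(maxlen=window)
    for x in stream:
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            is_anomaly = abs(x - mean) > k * (var ** 0.5 + 1e-9)
            yield x, is_anomaly
        else:
            yield x, False  # not enough history yet
        buf.append(x)

# A steady (toy) temperature signal with one spike.
readings = [20.0, 20.1, 19.9] * 5 + [35.0] + [20.0] * 5
flags = [x for x, anomalous in rolling_flags(readings) if anomalous]
print(flags)  # -> [35.0]
```

In a dashboard, only the aggregates and the flagged events would be drawn, which is what makes visual analytics over millions of sensor readings tractable.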

Toward Cyber-Physical Equivalence

The reality today is that factories are seldom built as designed because real-world physical circumstances often require changes during construction. Such changes are generally not fed back into the digital domain. Theoretically, this could be done of course, but the availability of systems, organizational borders, and so forth prevent this from being economically feasible in practice. Therefore, digital data is quickly rendered out of date, and the mismatches between the planning (digital) and construction (physical) stages lead to problems in the ramp-up phase or when changes are required to an existing factory. During the ramp-up of a production line, collisions between the product and the manufacturing machines are likely, even if virtual clash and clearance checks were successful. (See the article “Maximizing Smart Factory Systems by Incrementally Updating Point Clouds” by Evan Shellshear, Rolf Berlin, and Johan S. Carlson in this issue for a more detailed discussion of this topic.)

Following the Industrie 4.0 vision, this real production challenge motivated my research group to explore 3D capture devices to acquire dynamic processes and combine the acquired information with virtual 3D models to perform planning and replanning tasks based on actual geometric information in an augmented virtuality application.1 We consider this a first step toward cyber-physical equivalence (CPE), where a virtual representation (a virtual replica) of the CPPS is fully synchronized with the physical one in all aspects: geometry, function, and behavior.

The idea is to use 3D capture devices that are fast enough to acquire moving objects or articulated machinery and then stream this 3D information into a virtual environment to facilitate planning tasks. The resulting virtual 3D scene can be augmented with digital objects (such as conveyor belts) to check for collisions between existing parts and those planned to be added to the production line under consideration.

We demonstrate this idea using a miniature robot. For acquisition, we decided to take a low-cost approach using a Microsoft Kinect device (see Figure 2). Note that we do not aim to achieve the highest possible precision here (as needed in quality assurance) but simply strive for high speed and sufficient accuracy. The human user can reach in and directly interact with virtual objects in the scene, such as to manipulate the position or path of a virtual forklift. The user’s hand is captured by the Kinect. We use the information coming from reality to augment the virtual scene and let users interact with the real objects. If a collision with the virtual elements becomes unavoidable, we can halt the robot’s movement. Thus, we close the loop from acquisition of dynamic, articulated real-world elements to an augmented factory-simulation system and back to reality by controlling the robot.
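The closed loop described above can be sketched in a few lines: captured 3D points are tested against the volume occupied by a virtual object, and the control step halts the (toy) robot before a collision. The sensor data, the axis-aligned box standing in for a virtual conveyor belt, and the string commands are all illustrative stand-ins, not the actual system's interfaces:

```python
import numpy as np

def points_hit_aabb(points, box_min, box_max):
    """True if any captured 3D point lies inside the axis-aligned box."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return bool(inside.any())

# Illustrative planned volume of a virtual conveyor belt.
virtual_min = np.array([0.4, 0.0, 0.0])
virtual_max = np.array([0.6, 1.0, 0.3])

def control_step(captured_points):
    """One cycle of the loop: check the live scan, return a robot command."""
    if points_hit_aabb(captured_points, virtual_min, virtual_max):
        return "halt"  # a real object intrudes into the planned volume
    return "continue"

# Two captured points; the second one lies inside the virtual volume.
scan = np.array([[0.1, 0.5, 0.1], [0.5, 0.5, 0.1]])
print(control_step(scan))  # -> halt
```

In the real setting this check runs continuously on streamed depth data, and the collision test uses the full virtual geometry rather than a single box.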

Figure 2. Toward cyber-physical equivalence. A Microsoft Kinect device allows users to directly interact with both virtual and real objects in the scene. The system detects new obstacles on the fly and adapts accordingly.

Special Issue Research and Application Articles

This special issue contains four articles: the first introduces the field, and the other three address different stages of the design, engineering, and manufacturing process, showcasing challenging applications of visual computing and hinting at further research directions.

“Visual Computing as a Key Enabling Technology for Industrie 4.0 and the Industrial Internet” by Jorge Posada and his colleagues provides a detailed introduction to Industrie 4.0 and Industrial Internet as well as similar national initiatives from different countries, such as France, Spain, and the UK. The article systematically maps visual computing technologies and challenges with respect to the broader scopes of these initiatives. Additionally, it presents three project examples that address various challenges for visual computing and stimulate further research.

The second article, “Visual Analytics for Early-Phase Complex Engineered System Design Support” by Rahul Basole and his colleagues, is rooted in the early stages of design. The authors’ work exemplifies how visual analytics techniques can support the design and engineering phase of complex mechatronic systems and CPS, using a robot as an example.

In “Legibility in Industrial AR: Text Style, Color Coding, and Illuminance,” Michele Gattullo and his colleagues address cognitive issues on the shop floor using AR. This area is growing as an application field for AR. The authors present results of a user study that looked into the legibility of computer-generated content overlaid on top of real-world objects.

Last but not least, the Shellshear article “Maximizing Smart Factory Systems by Incrementally Updating Point Clouds” presents a scenario similar to the example presented here but for production facilities with static geometry. Specifically, the authors look at a rust prevention process line at a Volvo manufacturing plant. The goal is to determine whether new car-chassis models can utilize the existing lines without damaging the chassis or production facility. The authors search for collision-free paths using computational geometry algorithms and determine a collision-free design space that can be used for future car modifications. The ultimate goal is to avoid or minimize changes to the existing production lines.

The (r)evolution in Industrie 4.0 is being accelerated by the wide adoption of networking and Internet technology in traditional industries, such as manufacturing shop floors aiming to implement CPPS. Furthermore, advances in additive manufacturing, modeling systems, physics-based simulation, and the computational representation of materials have created a framework to enable new intelligent factories characterized by adaptability, resource efficiency, and ergonomics.

Future product designers and production/manufacturing engineers will be able to exploit changing environments in which clients increasingly ask for customized products that require highly flexible production. Such products may need to be designed and fabricated out of a myriad of materials.

This special issue serves to familiarize the computer graphics community with the Industrie 4.0 and Industrial Internet initiatives and mobilize more computer graphicists to initiate research and application development in this emerging and attractive area. We are convinced that computer graphics, modeling, and machine vision will play a key role in putting the human right at the center of this evolving field.

Reference



André Stork is the head of Interactive Engineering Technologies in the Fraunhofer Institute for Computer Graphics Research (IGD) at the Technische Universität Darmstadt, Germany. His research interests include geometric modeling and semantic shape processing, 2D/3D interaction and user interface techniques, rendering algorithms, knowledge management and information retrieval, collaboration support, and scientific visualization. Stork has a PhD in computer science from the Technische Universität Darmstadt. Contact him at andre.stork@igd.fraunhofer.de.