Interaction Explosion

By Rick Kuhn
Published 09/30/2019

Nearly 50 years have passed since the invention of the reference monitor concept for access control.  A fundamental building block of computer security, the reference monitor was conceived as a tamper-proof component, always invoked and therefore non-bypassable, and small enough to allow assurance of its correctness.  Inherent in this concept was the assumption that a small number of variables would be needed to determine whether an access request should be authorized.

Times have changed.  The early days of computer security were concerned with variables contained within the information system: access levels, roles, compartments, and so on.  Today, cyber-physical systems interact with the real world, which has vastly more variables, most of them beyond the system’s control.  Such systems may also be networked into “internet of things” configurations, with an enormous number of possible interactions and even less control over what is coming into the system: possibly unknown sources or an unpredictable number of connections.
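To see why interactions explode, consider a rough back-of-the-envelope calculation (the parameter counts below are purely hypothetical): with n variables, each taking v possible values, the number of distinct t-way value combinations is C(n, t) · v^t, which grows rapidly with both n and t.

```python
# Illustrative only: counts of t-way value combinations for assumed
# parameters (n variables, v values each). Real systems have mixed and
# continuous domains, but the combinatorial growth is the same.
from math import comb

n, v = 50, 10  # assumption: 50 environment/input variables, 10 values each
for t in (2, 3, 4):
    print(f"{t}-way combinations: {comb(n, t) * v**t:,}")
# 2-way:       122,500
# 3-way:    19,600,000
# 4-way: 2,303,000,000
```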

Testing of Autonomous Systems Must Reflect the Real World

Autonomous systems add an even more challenging layer of complexity to assurance.  It is well known that new bugs typically appear, and safety and security properties often don’t hold, when software is installed in a new environment.  For autonomous systems, the environment is in some sense always new.  It is often observed that failures or vulnerabilities in these systems arise when the training data set did not include the particular combination of conditions that appeared in use, i.e., the environment changed.  This should not be surprising.  Decades of studies of industrial accidents have shown that nearly all involve interactions of multiple factors, a combination or sequence of events that brought the system to failure.

The same phenomenon is being observed in autonomous vehicle accidents, but the potential for safety failures or security vulnerabilities is being multiplied enormously.  While complex industrial systems of the past century may have numbered in the thousands, autonomous systems of similar complexity will soon number in the millions.

So what is to be done?  Can we ever get a handle on assurance for today’s autonomous systems, with their vast interconnections and non-determinism?  Yes.  One way forward is to measure the degree to which the environments in which these systems are tested reflect the range of conditions they will encounter in the real world.  Methods from combinatorics and graph theory can help, and this is an ongoing area of research.
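As a rough illustration of what such a measure can look like, the sketch below computes t-way combinatorial coverage: the fraction of the possible t-way value combinations that actually appear somewhere in a set of tests (or observed operating conditions).  Representing tests as tuples of discrete parameter values is an assumption made for the example; this is not a description of any particular tool.

```python
# Minimal sketch of t-way combinatorial coverage measurement, assuming
# discrete parameter domains and tests given as tuples of values.
from itertools import combinations, product

def tway_coverage(tests, domains, t):
    """Fraction of possible t-way value combinations covered by the tests.

    tests   -- list of test cases, each a tuple of parameter values
    domains -- list of the possible values for each parameter
    t       -- interaction strength (e.g., 2 for pairwise)
    """
    params = range(len(domains))
    total = covered = 0
    for cols in combinations(params, t):           # each set of t parameters
        possible = set(product(*(domains[c] for c in cols)))
        seen = {tuple(test[c] for c in cols) for test in tests}
        total += len(possible)
        covered += len(possible & seen)
    return covered / total

# Example: 4 binary parameters, a small test set, pairwise (t=2) coverage.
domains = [[0, 1]] * 4
tests = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 1, 0, 1)]
print(f"2-way coverage: {tway_coverage(tests, domains, 2):.0%}")  # 67%
```

Practical coverage measurement must also handle continuous variables, constraints among parameters, and far larger input models; the sketch above only conveys the basic counting idea.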