• A computing system might fail because a component has failed. This could mean that the component's producer should be held accountable, or it could mean that the system integrator should be held accountable for deploying the component in an environment the producer never intended. In February 1991, a Patriot missile battery was deployed in an environment its designers never anticipated: it was run continuously for 100 hours rather than 14, and the accumulated clock error left the system ineffective as an antimissile defense, with 28 dead and 98 injured as a result (the arithmetic is sketched after this list). Unless software components are accompanied by adequate descriptions (functional specifications as well as assumptions about the deployment environment, such as what threats can be tolerated), we can't assign blame for system failures that can be traced to component failures.
• Alternatively, a computing system might fail even though no component fails, because unacceptable (and surprising) emergent behaviors arise. There is a long tradition of such surprises in bridge design, including the Tacoma Narrows Bridge in Washington State and the Millennium Bridge in London. Moreover, correct behavior for bridges is generally well understood and relatively simple to state, compared with correct behavior for nontrivial software systems. And unlike bridges, software typically isn't delivered with a paper trail documenting what the system is supposed to do (and not supposed to do), why the design should work, and what assumptions are being made.
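The mechanism behind the Patriot failure is simple arithmetic. The sketch below is illustrative only, assuming the figures commonly cited in the GAO report on the incident (a 0.1-second clock tick truncated to a 24-bit fixed-point value, and a Scud closing speed of roughly 1,676 m/s): a per-tick error of about a tenth of a microsecond, harmless over the intended 14-hour mission, grows over 100 hours into an offset of hundreds of meters in where the system looks for its target.

```python
# Illustrative sketch of the widely reported Patriot clock-drift arithmetic.
# Figures are the commonly cited approximations, not a reconstruction of the
# actual software.

# 0.1 s cannot be represented exactly in the 24-bit fixed-point format the
# system used; truncation leaves a tiny error on every clock tick.
true_tick = 0.1
stored_tick = 838860 / 2**23           # 0.1 truncated to 23 fractional bits
tick_error = true_tick - stored_tick   # ~0.000000095 s per tick

hours_running = 100
ticks = hours_running * 3600 * 10      # one tick every 0.1 s
clock_error = ticks * tick_error       # ~0.34 s accumulated after 100 hours

scud_speed = 1676                      # m/s, approximate closing speed
tracking_offset = clock_error * scud_speed   # hundreds of meters of error

print(f"accumulated clock error: {clock_error:.3f} s")
print(f"tracking offset at Scud speed: {tracking_offset:.0f} m")
```

Running the sketch gives a clock error of about a third of a second, which at Scud speeds corresponds to an offset of several hundred meters, enough to put the incoming missile outside the region the radar was told to search.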