OCTOBER 2007 (Vol. 40, No. 10) pp. 6-7
0018-9162/07/$31.00 © 2007 IEEE

Published by the IEEE Computer Society
NIST Security Standards
Computer recently published two interesting articles on NIST security standards, the first by Feisal Keblawi and Dick Sullivan ("The Case for Flexible NIST Security Standards," June 2007, pp. 19–26) and the second by Ron Ross ("Managing Enterprise Security Risk with NIST Standards," Aug. 2007, pp. 88–91). These articles discuss NIST standards FIPS 199 and 200 and NIST Special Publication 800-53, which mandates security "controls" on federal civilian information systems.
I would like to make the following points with regard to these articles.

    • Systems are being categorized as low, moderate, or high and then evaluated for compliance with the corresponding baseline controls, without using the tailoring feature. In other words, risk assessments have ceased to be true risk assessments and have instead become ratings of how well a system conforms to 800-53. The system's exposure to various threats is not considered: a system behind locked doors with no Internet access receives the same assessment as one connected to the Internet full time.

    • Agencies are being rated on a scorecard using the 800-53 compliance ratings of their systems, not on the true risk to public safety and agency mission.

    • Under the NIST standards, a system with a high confidentiality requirement must have the same controls as one with a high availability requirement, yet protecting high confidentiality can degrade availability. Consider a weather information system for controllers managing an airport's traffic: it must be available 24/7, since aircraft safety could be affected if it goes down, and its data is not confidential. Such a system should not lock up when the user does not touch it, yet 800-53 control AC-11 requires session lock, and AC-12 requires session termination, after a period of inactivity. If a system administrator using the system fails to sign out, the system should revert to the safe controller mode, not stop displaying the weather.

    • Safety must be rule number one for critical infrastructure systems. Security controls must not negatively impact safety.

    • The NIST standards don't consider the interactions in a "system of systems" with multiple interconnected systems. Security vulnerabilities that affect multiple systems pose a much higher risk than those confined to a single system, because one system often serves as the backup when another fails.

If voice communication is the backup for an automated system, running voice over IP on the same network as the automated system means that any network vulnerability could disable both.
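The session-lock concern raised above can be sketched in a few lines. This is a hypothetical illustration, not anything specified by NIST SP 800-53: the class, mode names, and timeout are all invented for the example. The idea is that when a privileged session goes idle, the display drops back to a read-only safe mode rather than locking the screen.

```python
import time

# Hypothetical sketch: on inactivity, end only the privileged session and
# revert to a read-only "controller" mode, so the weather stays visible.
# Names and timeout are illustrative, not taken from NIST SP 800-53.

ADMIN_TIMEOUT_S = 15 * 60  # inactivity window before privileges are dropped


class WeatherDisplay:
    def __init__(self):
        self.mode = "controller"            # safe, read-only: weather always shown
        self.last_input = time.monotonic()

    def admin_login(self):
        self.mode = "admin"                 # privileged mode for maintenance
        self.last_input = time.monotonic()

    def on_input(self):
        self.last_input = time.monotonic()  # any keypress resets the idle clock

    def tick(self):
        # Instead of an AC-11-style screen lock, drop privileges only:
        # the display reverts to controller mode but keeps showing weather.
        if self.mode == "admin" and time.monotonic() - self.last_input > ADMIN_TIMEOUT_S:
            self.mode = "controller"
```

In an assessed system this behavior would have to be justified as a compensating control; the sketch only illustrates the fail-safe behavior the letter argues for.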
James E. Hooper
Feisal Keblawi and Dick Sullivan respond:
We concur with Mr. Hooper's comments. NIST in general, and Ron Ross in particular, have performed a great service by developing initial guidelines for security control selection. We agree with Mr. Hooper that risk assessments—and not solely impact assessments—must remain the overarching context for control selection. Further, agency performance metrics need to rest on risk rather than on a NIST compliance scorecard. Finally, we agree that NIST should extend its current risk methodology to address systems of systems. NIST should not mandate standards before their time, that is, before agencies can meet the practical challenges of applying them.
Ron Ross responds:
James Hooper makes some very thoughtful and interesting points.
The risk assessment is still very much alive and well in the NIST risk management framework and supporting security standards and guidelines. The baseline security controls recommended by NIST are just the starting point for an organization as it develops the security plans for its information systems.
The flexibility available in using the tailoring guidance in the NIST standards and guidelines works hand in hand with an organization's risk assessment to ensure that it selects and implements the right safeguards and countermeasures within an information system to provide adequate protection for enterprise missions and business functions. Risk is considered at every step in the process of building and executing the security plans for information systems, with specific and credible threat information taken into account when available.
The NIST risk management framework is not about applying a static set of security controls to an information system; rather, it is about employing a disciplined and structured approach to managing the risk to enterprise operations and assets, individuals, other organizations, and the nation that arises from the use of information systems.
In the air traffic example that Mr. Hooper cites, the tailoring guidance in NIST Special Publication 800-53, either through the scoping considerations or compensating control provisions, would more than adequately address that situation. Information systems with high confidentiality requirements are not required to have security controls for high availability unless the enterprise requires it.
The high watermark requirement is used only to obtain the initial entry point into one of the three sets of baseline security controls; the tailoring guidance can then quickly remove unnecessary controls based on the organization's true requirements.
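The two steps described above—entering a baseline via the high watermark, then tailoring controls away—can be sketched roughly as follows. The control IDs and the tailoring rule are illustrative only, not the actual SP 800-53 procedure.

```python
# Rough sketch of baseline selection and tailoring as described above.
# FIPS 199 rates each security objective low, moderate, or high; the high
# watermark picks the baseline; tailoring then removes controls that the
# organization's risk assessment scopes out. Illustrative only.

LEVELS = ["low", "moderate", "high"]


def high_watermark(confidentiality, integrity, availability):
    """Overall category = the highest of the three impact levels."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)


def tailor(baseline_controls, scoped_out):
    """Drop controls the risk assessment shows are not applicable."""
    return [c for c in baseline_controls if c not in scoped_out]


# The weather-display example: low confidentiality, high availability.
category = high_watermark("low", "moderate", "high")  # "high" baseline
baseline = ["AC-11", "AC-12", "CP-7", "SC-8"]         # illustrative subset
selected = tailor(baseline, {"AC-11", "AC-12"})       # session controls scoped out
```

The point of the sketch is that the high watermark fixes only the starting baseline; the selected set can end up substantially smaller once tailoring reflects the system's real risk.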
One of the core principles in the application of the NIST security standards and guidelines is that the organization's mission always takes precedence, and the application of specific security controls should never degrade or adversely affect the enterprise's missions or business processes. Above all, information security is a mission and business enabler with respect to reliability, fidelity, and quality.
NIST will continue to address the system of systems issues with regard to information security in its upcoming Special Publication 800-39, Managing Enterprise Risk: A Framework for Addressing Cyber Threats to Organizations, Individuals, and the Nation (projected for publication in October 2007). Trust relationships among cooperating/partnering organizations and the trustworthiness of information systems involved in those partnerships will be two of the key concepts addressed in that publication.
Consciousness and Computers
I disagree with the idea that the conscious mind is not mysterious (Neville Holmes, The Profession, "Consciousness and Computers," pp. 100, 98–99, June 2007): It is mysterious and also misunderstood, particularly by adherents of the matter-only school of philosophy.
Indeed, the conscious mind with its subjective/objective split is a conundrum that is perhaps the most mysterious thing in the universe. Although this wonderful mystery is right under our noses, many people never notice it—it takes a sort of lateral thinking to see what philosophers mean when they talk of the subject and the object.
The subjective aspect of consciousness is the most mysterious of all, yet it is readily accessible. There is no slippery slope where the objective and subjective melt into one another. They may be subtly interwoven, but they are no more equivalent than chalk and cheese.
There is a binary flip here: I have internal states and perceptions in the first person, and Jim, the neuroscientist, observes objective processes in my brain in the third person. That's the "twin paradox" of consciousness: Jim's first-person processes are my third-person ones as I measure his EEG, NMR, PET, and so on. Brain processes are third person, but what I see in first person is mind. Somewhere in between is the interpersonal.
Nothing objective is purely temporal or spatial—space-time sees to that. What exists are space-time processes of a material brain changing along the time dimension of the 4D space-time continuum. These processes are just the objective neural correlates of consciousness (NCC). On the other hand is the internal virtual reality, possibly accessing internal dimensions. That is the mind: You can't really see the mind or measure it—you only ever measure the NCC from outside. Mind can only be measured from within.
Hugh Deasy
Neville Holmes responds:
I'm sorry my attempt to remove the mystery doesn't seem to have been clear enough. A simpler version might go as follows: Neural systems start with perceptions and convert them to behavior. In complex animals, much of the perception and much of the behavior must be automatic because the processes forming the mind are too complex to allow conscious decision making throughout.
The mental path from sensation to motion therefore passes from the unconscious formation of perceptions, to conscious high-level decision making, to the unconscious sequencing of muscular actions. In other words, consciousness resides in those mental processes that mediate between perception and motor behavior. Thus, humans and flies both have consciousness, though of quite different qualities.
This is rather nicely brought out in the recent work on out-of-body experiences reported, for example, in The Economist ( www.economist.com/science/printerfriendly.cfm?story_ID=9682520).