Letters

Issue No. 11 - November 2007 (vol. 40)
pp. 6-7
Published by the IEEE Computer Society
Multicore Challenges
Regarding the September Technology News article (David Geer, "For Programmers, Multicore Chips Mean Multiple Challenges," pp. 17–19), I would like to point out a very important detail about the programmer's role in multicore implementation. Obviously, the author has no idea who or what controls the number of cores or CPUs utilized on a machine.
The software usually does not decide which of the cores or CPUs will be used in a multicore or multiprocessor computer. This is the kernel's job, and the operating system kernel must support multicore/multi-CPU hardware to be able to utilize all of it.
There are several techniques that allow software to take advantage of multiple cores, such as forking and multithreading. If these techniques are used appropriately, the kernel takes care of scheduling the separate processes or threads. Multithreaded programming is more complex than ordinary procedural programming because of the potential for deadlocks and data races. Forking (creating subprocesses), on the other hand, is more difficult from the standpoint of sharing common data.
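To make the distinction concrete, here is a minimal C sketch, assuming a POSIX system; the worker function and all names are illustrative only, not from the article. In both cases the program merely creates the threads or the child process; which core each one runs on is decided entirely by the kernel scheduler.

```c
/* Minimal illustration of multithreading versus forking (POSIX assumed).
   The program creates the units of work; the kernel assigns them to cores. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void *worker(void *arg)
{
    /* Threads share the process's address space, so any shared data
       would need protection (for example, mutexes) against races. */
    printf("thread %ld runs on whichever core the kernel chooses\n",
           (long)arg);
    return NULL;
}

int main(void)
{
    /* Multithreading: create two threads and wait for them. */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Forking: the child gets its own copy of the address space, so
       sharing data afterward requires explicit IPC such as pipes or
       shared memory, which is the difficulty the letter mentions. */
    pid_t pid = fork();
    if (pid == 0) {
        printf("child process, scheduled independently by the kernel\n");
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```

Compiled with gcc -pthread, the sketch shows that the application expresses parallelism but never names a core.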
The point is that, from a functionality perspective, programming for multicore machines is not that different from single-core programming, because the application layer has no role in controlling which CPU core executes a thread or subprocess. The application programmer can manage threads and processes only within the application itself; the programmer cannot reach into the kernel from the application layer, especially not to control how processes or threads are scheduled. The languages and development platforms listed in the article provide better thread and process control within the application, but they in no way influence the kernel's side of thread or process execution.
Evgueni Tzvetanov
boonslang@hotmail.com
David Geer responds:
While some of the writer's statements are true, they don't conflict with the article.
First, let's clear up the context. The article primarily discusses multicores for PCs and servers. It does not discuss multiprocessors—which are not the same thing—except in the context of introducing the topic. Supercomputers, which use multiprocessors, are barely mentioned.
Application programmers for high-performance computers and multiprocessors have more experience writing applications for those systems. My intention was to address the challenges facing PC and server application developers who might not have a lot of experience writing applications for multicores on these systems.
The article does not say that PC or server software decides "which" of the cores it will use. Rather, the article discusses developing modern PC and server software so it can perform more optimally on multicores.
One example of this is dividing processes into smaller elements so that they can be assigned to different cores, rather than remaining bunched up on a single core, to further speed the arrival of computational results. This is found under the subhead "Dividing activities into smaller parts."
The fact that this is the stance of the article is clear from paragraph 14, where I refer to Thomas Sterling's input: "Developers write applications for multicore chips so that they will divide into tasks that the operating system can assign to run in parallel on multiple cores."
None of the examples or comments about addressing the challenges of optimizing for multicores takes away the operating system kernel's role in then deciding which cores to assign these newly divided tasks to. In fact, Sterling's statement clearly separates what application developers and their software applications do from what the operating system, where the kernel resides, does. The letter writer has pointed out the latter, assuming that it excludes the former. It does not.
Yes, the kernel must support multicores to use them all. But the software also must support multicores to use them all, or it will default to using only one core. Dividing these applications will increase their performance on multicores by helping them take advantage of parallelism.
The challenge for the PC and server application programmer is in how to better divide the threads and subprocesses so that the divisions that the kernel must work with perform better in the multicore environment.
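To illustrate the kind of division described here, the following is a hedged C sketch, again assuming POSIX threads; the chunk count and all names are illustrative only. The application partitions the work into smaller pieces, and the operating system decides which core executes each one.

```c
/* Sketch: divide a summation into chunks that the kernel can run in
   parallel on multiple cores. All names and sizes are illustrative. */
#include <pthread.h>
#include <stdio.h>

#define N       1000000
#define NCHUNKS 4

static double data[N];

struct chunk {
    int start, end;
    double partial;
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->partial = 0.0;
    for (int i = c->start; i < c->end; i++)
        c->partial += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t tid[NCHUNKS];
    struct chunk chunks[NCHUNKS];

    /* Divide the activity into smaller parts... */
    for (int i = 0; i < NCHUNKS; i++) {
        chunks[i].start = i * (N / NCHUNKS);
        chunks[i].end   = (i + 1) * (N / NCHUNKS);
        pthread_create(&tid[i], NULL, sum_chunk, &chunks[i]);
    }

    /* ...and let the operating system assign the pieces to cores. */
    double total = 0.0;
    for (int i = 0; i < NCHUNKS; i++) {
        pthread_join(tid[i], NULL);
        total += chunks[i].partial;
    }
    printf("total = %.0f\n", total);
    return 0;
}
```

A single-threaded version would produce the same result but would keep the entire computation bunched up on one core.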
Network Access Control
In "Protecting Networks by Controlling Access" (Technology News, Aug. 2007, pp. 13–19), Sixto Ortiz Jr. raises certain good points about using network access-control technology as an integral part of network security.
The motivation for NAC is clear—authenticating users and scanning their machines for security compliance before allowing them onto the LAN. However, is NAC the optimal approach to ensure that a network administrator is not taken by surprise? How do administrators deal with verified devices that become infected or are hacked once on the network?
An effective NAC system must be able to identify a new device connecting to the network and then ensure its adherence to the security policy. Most companies use the Dynamic Host Configuration Protocol for IP address assignment, which makes DHCP a good hook for recognizing new devices. However, DHCP-based NAC requires agents on the devices, and most available agents support only Windows clients. Moreover, users can easily bypass DHCP-based controls by assigning static IP addresses.
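As a purely illustrative sketch of DHCP-based device recognition, the fragment below scans an ISC-dhcpd-style leases file for MAC addresses it has not seen before; the file path and format are assumptions, and a real NAC product would integrate with the DHCP server or the switches directly.

```c
/* Hypothetical sketch: flag previously unseen MAC addresses by scanning
   an ISC-dhcpd-style leases file. Path and format are assumptions. */
#include <stdio.h>
#include <string.h>

#define MAX_DEVICES 1024

static char known[MAX_DEVICES][18];
static int  nknown;

static int is_known(const char *mac)
{
    for (int i = 0; i < nknown; i++)
        if (strcmp(known[i], mac) == 0)
            return 1;
    return 0;
}

int main(void)
{
    FILE *f = fopen("/var/lib/dhcp/dhcpd.leases", "r"); /* assumed path */
    if (!f) {
        perror("dhcpd.leases");
        return 1;
    }

    char line[256], mac[18];
    while (fgets(line, sizeof line, f)) {
        /* Lease entries contain lines such as:
             hardware ethernet 00:1a:2b:3c:4d:5e;        */
        if (sscanf(line, " hardware ethernet %17[0-9a-fA-F:]", mac) == 1 &&
            !is_known(mac) && nknown < MAX_DEVICES) {
            printf("new device %s: check security-policy compliance\n", mac);
            strcpy(known[nknown++], mac);
        }
    }
    fclose(f);
    return 0;
}
```

Anything this simple would, of course, miss devices that bypass DHCP with static addresses, which is exactly the weakness noted above.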
An authentication approach is useful for restricting access to network servers and services, but the devices that need verification are usually unmanaged and do not initiate authentication. Another common approach for network access is through the 802.1x standard, which detects entry to the network before the end point obtains an IP address. The drawback is that upgrading the infrastructure could incur high costs. Not only must all network devices be configured for 802.1x, but they must also be integrated with the authentication server.
As for OS patch-level checks, a minimum patch level should be tested on end-point devices. However, requiring the latest patch level for every end point that enters the network could be difficult and time-consuming. Enforcing highly restrictive policies can create a high volume of demand for the IT department and result in an unhappy user population.
The extent of control that a NAC solution can provide will depend on its architecture. An out-of-band architecture can fulfill the need for guest access control, while inline NAC architectures perform well at restricting user access once users are on the LAN.
It's important to sort through both the capabilities NAC can offer and the necessary LAN security requirements. Preadmission is only one piece of the puzzle. For a robust NAC solution, postadmission NAC, quarantine, and remediation capabilities are equally important.
Hong-Lok Li
lihl@ams.ubc.ca
Human Consciousness versus Cognitive Fluidity
I was rather concerned by the strong analogy that Neville Holmes draws between human consciousness and computer architecture in his July 2007 The Profession column ("Consciousness and Computers," pp. 100, 98–99). In light of current research on human neurocognition and primate neurological evolution, I fear that his comments are not based on the most recent concept of the modular primate brain. The "Swiss penknife architecture" has evolved in modern humans into so-called "cognitive fluidity." It is the integration of the brain modules for social skills, tool use, and natural-world knowledge, bound together by language and the rather weak general intelligence found in most great apes, that forms the basis for consciousness (see http://psy.ucsd.edu/~dmacleod/141/presentations/OliviaConsciousness.ppt).
Although it sadly is not widely used in technical education, The Prehistory of the Mind (Thames and Hudson, 1999) by Steven Mithen offers an excellent explanation of the basis of consciousness. Unfortunately, this topic still tends to be dominated by mid-20th-century ideas, which have little foundation in modern neurocognitive studies using MRI and PET technologies.
One of the most interesting applications of this knowledge is in the handling of autistic spectrum conditions, and I would recommend reading the work of Simon Baron-Cohen of Cambridge on this topic. His concept of mindblindness, which is especially a problem for the science and engineering communities, gives good insight into why brain architecture and cognitive fluidity are such difficult topics for engineers to grasp. It also has several implications for the evolution of communication systems over the next 20 years.
Keith Baker
kbaker@iae.nl
Neville Holmes responds:
If the analogy seemed strong to you, that was because I was trying to get readers to think about how they think and its implications for the computing profession. If you read my column of May 2002 ("Would a Digital Brain Have a Mind?" pp. 112, 110–111), you will find my argument that our brains and our digital computers are poles apart and that the anthropomorphic attitude all too often found in popular and other writings on computers is wrong not only factually but morally.
I would also suggest that my essay doesn't contradict the ideas of cognitive fluidity. Coincidentally, The Economist recently ran an interesting and relevant item: "Evolutionary Psychology: More News from the Savannah" (29 Sept.; www.economist.com/science/displaystory.cfm?story_id=9861405).