Letters
FEBRUARY 2008 (Vol. 41, No. 2), pp. 6–9
0018-9162/08/$31.00 © 2008 IEEE

Published by the IEEE Computer Society
Metadata and Ontologies
The article "Toward a Social Semantic Web" by Alexander Mikroyannidis (Web Technologies, Nov. 2007, pp. 113–115) displays a fundamental misunderstanding of the nature and limitations of both metadata and ontologies.
Cory Doctorow provided one of the best critiques of the limitations of metadata (www.well.com/~doctorow/metacrap.htm).
One limitation of an ontology is that it is only a data model of the entities in a domain (http://en.wikipedia.org/wiki/Ontology_(computer_science)).
Although the relational data model (E.F. Codd, "A Relational Model of Data for Large Shared Data Banks," Comm. ACM, June 1970, pp. 377–387) can represent any model of data within a domain, it does not address the semantics of any database because

    • defining an entity (relation) is arbitrary (W. Kent, Data and Reality, North Holland Publishing, 1978);

    • partitioning an entity into a hierarchy (Codd's normalization) is arbitrary, and there is no a priori best hierarchy for this partitioning (W.S. Jevons, The Principles of Science, Dover Publications, 1874); and

    • partitioning a set of sets (concept domain) into nonoverlapping subsets is NP-complete, so no polynomial-time algorithm for it is known. The best that can be done is to verify, in polynomial time, whether a given partitioning has overlapping subsets (see the sketch after this list). If you do not require nonoverlapping subsets, then any arbitrary partitioning will do, but you will not be able to use it for reasoning about the domain.

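To make the verification point concrete, here is a minimal Python sketch (with hypothetical example data) of the polynomial-time check described above: it tests whether a given partitioning of a concept domain contains overlapping subsets, even though finding a good nonoverlapping partition remains intractable in general.

    from itertools import combinations

    def has_overlap(partition):
        """Return True if any two subsets in the partition share an element."""
        # Comparing every pair of subsets is polynomial in the size of the
        # partition: verifying disjointness is cheap, even though finding a
        # good nonoverlapping partition is intractable in general.
        return any(a & b for a, b in combinations(partition, 2))

    # Hypothetical concept domain partitioned by an ontology author.
    partition = [{"car", "truck"}, {"bicycle"}, {"truck", "van"}]
    print(has_overlap(partition))  # True: "truck" appears in two subsets
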
This makes the creation of an ontology (from a folksonomy or any other source) an arbitrary exercise of the author, and it reflects all of the author's unstated assumptions and prejudices.
Rainer Schoenrank
rschoenrank@computer.org
The author responds:
Thanks for taking the time to read my article. I'd like to comment on two points:
You argue that ontologies are arbitrary by nature. As explained in the article, my proposal addresses this inherent drawback of ontologies by introducing tagging consensus into ontology construction.
You question the general contribution of ontologies and metadata in knowledge management. I need not argue against your views—many people have already done that for me. I suggest reading, for example, the following:

    • J. Davies, D. Fensel, and F. van Harmelen, Towards the Semantic Web: Ontology-Driven Knowledge Management, John Wiley and Sons, 2003.

    • M. Daconta, L. Obrst, and K. Smith, The Semantic Web: A Guide to the Future of XML, Web Services, and Knowledge Management, John Wiley and Sons, 2003.

Alexander Mikroyannidis
a.mikroyannidis@ieee.org
Killer Robots
Noel Sharkey might want to add the landmine to his collection of robotic killers (The Profession, "Automated Killers and the Computing Profession," Nov. 2007, pp. 124, 122–123). It's a very dumb robot and can't move about, but it will kill indiscriminately—child, adult, soldier, farmer, friend, or foe.
The landmine models many of the problems Sharkey mentions pretty well, especially not having a human in the loop. Some suggested solutions include having a timer that deactivates the mine after some period of use. Maybe deactivation will be required for robots as well.
Charles J. Neuhauser
cneuhauser@earthlink.net
The author responds:
I agree that landmines are a type of robot, and their power to kill innocents long after wars have been fought is well-known and well-argued. The old antipersonnel mines will kill on contact and are like other reflex weapons such as the Navy CIWS. They fall foul of the notion of just war because they kill indiscriminately and there is difficulty in assigning responsibility for mishaps.
In 1997, 153 countries signed the Ottawa Mine Ban Treaty, but not the US, China, or Russia. President Clinton had planned to join in 2006, but George W. Bush abandoned the plan in 2004 because it would mean giving up a "needed military capability." The US policy was to move toward mines that self-destruct (The Lancet, 27 Aug. 2005; www.thelancet.com).
I did not include mines in the article because I wanted to focus tightly on creating discussions about new technological threats to humanity in the form of mobile autonomous weapons that will actually make decisions (so to speak) about who to kill. This is where my expertise may be of some use.
Unfortunately I have recently heard of a new breed of mine that is meant to "intelligently" determine friend from foe and fire torpedoes at the latter. These do come under my remit, and I am investigating further for future articles.
Noel Sharkey
noel@dcs.shef.ac.uk
Doing More with Less?
Simone Santini's perspective on what we need for computers is painfully on target (The Profession, "Making Computers Do More with Less," Dec. 2007, pp. 124, 122–123). However, there is one fatal omission in his discussion of an ideal device.
Once you have a simple computer that does what you need, it is very likely not to require replacement or need new software for many years. This is an economic disaster. Software vendors from Redmond to Rangoon and hardware vendors from Santa Clara to Beijing are addicted to regular fixes of money, which are inversely proportional to the life span of a given hardware-software platform.
When I get my One Laptop Per Child computer (see www.laptop.org), I may actually have that simple device. We will see.
Jim Isaak
j.isaak@snhu.edu
The author responds:
I assume that your definition of "economic disaster" is more than a bit tongue in cheek. Of course hardware and software vendors like the explosion of sales they provoke with every new release cycle, but they know (or they should know) that they are playing a risky game. They are generating such unreasonable get-rich-quick expectations in their stockholders that now if they meet expected revenues rather than exceeding them by a fat margin, their stock loses value.
What manufacturers don't like to hear is that after the initial transient in which every Tom, Dick, and Harry on the planet wants to buy a brand new product, they will have to accept a market plateau. The computer industry, it seems, is focusing on extending the transient beyond the limits of ridiculous rather than preparing for the plateau.
Well, what can I say? The day that software executives notice, sadly, that they can't afford a second private jet, I will shed a tear for them.
Simone Santini
simone.santini@uam.es
I read Simone Santini's article with great interest, and I agree with most of the issues he raised. In fact, an operating system that fits his description already exists: IBM's OS/2, which runs on a 25-MHz processor, can run in 8 Mbytes of RAM, and installs in 100 Mbytes of disk space.
The system was in active development from 1987 to 1996, with Microsoft participating until 1992, when it dropped out of the OS/2 effort to pursue its Windows product instead. At that time, processors were obviously much slower, and RAM was much more expensive. For this reason, most of the OS/2 kernel was written in assembler. But this does not prevent it from running on the latest dual-core Intel Core 2 and AMD Athlon 64 processors, and it runs fast.
I disagree with the comment about colors. At the time the windowing interface was being designed, IBM hired lots of psychologists to study visual perception effects on user interfaces. The outcome was the 1995 Common User Access standard still in use today. CUA specifies a consistent user interface, not necessarily a pretty one, but one that can use color icons and configuration notebook tabs.
The problem with application installation stems from the use of DLLs, or shared libraries, which represent a major design decision by IBM because they use RAM much more efficiently. As long as the system uses the same version of the DLLs for most applications, the DLLs stay in memory and are shared among processes. That's what makes the system lightweight and fast. Unfortunately, the price to pay for this is the complexity of installation and updates. OS/2 solved this with its configuration-installation-distribution facility, which is similar to the package manager in AIX or Red Hat Linux's RPM but predates them by a decade.
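As a rough illustration of the sharing mechanism (a generic sketch using a POSIX-style loader via Python's ctypes, not OS/2's actual facility), the following code loads a shared library at run time; the operating system keeps one copy of the library's read-only code pages in memory and maps them into every process that loads it, which is the RAM saving the DLL design buys. The library name resolved here is platform-dependent and is assumed for the example.

    import ctypes
    import ctypes.util

    # Locate and load the C math library at run time (the file name is
    # platform-dependent; find_library abstracts that detail away).
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare cos()'s signature and call it. Every process loading this
    # library shares the same read-only code pages in physical memory;
    # the price is that all of them must agree on a compatible version,
    # which is exactly the installation complexity described above.
    libm.cos.restype = ctypes.c_double
    libm.cos.argtypes = [ctypes.c_double]
    print(libm.cos(0.0))  # prints 1.0
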
OS/2 still enjoys a small but devoted following, especially in Europe, but IBM stopped supporting it in 2003. Although the system has clear technical advantages over its rivals, it never was a commercial success. Today, there is an OEM distribution of OS/2 called eComStation (www.ecomstation.com).
Vadim Kavalerov
Vadim.Kavalerov@sig.com
The author responds:
From my point of view, the main point of this message is the statement that "the system was in active development from 1987 to 1996." At that time, I assume, things got out of hand (thanks, I suppose, to Windows 95), and the race to useless features and monster operating systems was under way.
I don't know OS/2 well enough to express a technical judgment on it but, if what you say is true (and I have no reason to believe it isn't), its demise is a good example of the trend I was criticizing in my article.
Now I would like to see a computer company with enough courage to produce a laptop with 300 Mbytes of secondary memory and a 50-MHz CPU that is light and thin and has a very long battery life, running one of these stripped-down operating systems (and equally stripped-down programs). Even more, I would like to see a public with the culture, intelligence, and resistance to commercial pressure to make such a computer a success.
I partially buy the argument about color. Partially, because I don't think there is anything ergonomically significant that can't be done with 16 colors. Moreover, I stand by my opinion: If a black-and-white screen can buy me a couple of hours of battery life, I'll go for it!
Simone Santini outlined proposed guidelines for "a fast and efficient system." Are these guidelines realistic? Today, are computers really "meant to be for people who use a computer as a work instrument" or merely used to "write a mathematical paper"? A computer with a monochrome display and an operating system occupying less than 50 Mbytes of storage is more like an early-stage terminal.
A computer is a complex piece of machinery consisting of many components, each of which is a separate invention. Within the past several decades, the speed and power of the computer have grown at an exponential rate. During that time, computers have evolved from being primarily professional and business machines to become our primary entertainment and educational tools.
Computers have become the heartbeat of the modern world. They communicate. They act. They are our personal assistants. When we are surfing the Internet, participating in a videoconference with colleagues thousands of miles away, viewing with amazement 3D graphics for cars and architectural design, or exchanging e-mail messages, we can't imagine our lives without a computer being involved. Never in history has one invention had such an influence on humanity as a whole. However, without question, a "simple" computer can no longer fulfill our ever-changing demands.
The question is not whether we should make the operating system smaller but what exactly we should do to keep our computers running as fast as new. Performing regular tasks such as uninstalling old and unused software, performing disk cleanup, running hard-disk maintenance utilities, removing spyware/adware, and keeping the security software up to date are practical ways to keep systems running at peak performance.
Hong-Lok Li
lihl@ams.ubc.ca
The author responds:
As I understand it, your point is that computers today perform many functions, some of which require fast CPUs and—this point is more doubtful—large operating systems. Your underlying assumption seems to be that all computers should do all things.
Consider, as a parallel, motor vehicles. People who must transport a heavy load drive 18-wheeler trucks. This, of course, doesn't imply that every activity performed with a motor vehicle requires an 18-wheeler, or that everybody should buy one. Sometimes a small two-seater city car is the perfect solution for a given transportation problem.
The same applies to computers. Some people use them to watch videos or to "view with amazement 3D graphics" (should I suppose that if the people were not amazed, the requirements for the operating system would change?). Other people use computers to write reports and calculate simple spreadsheets. There is no reason why these two groups of people should use the same machines, the same operating systems, and the same programs.
The fact that a computer is a complex piece of machinery built of many components is utterly irrelevant: so is a skyscraper, and so is a car. But, as we have seen in the case of motor vehicles, the device's complexity doesn't mean that we should adopt a "one-size-fits-all" model.
Two statements in this message reveal a profound philosophical and attitudinal difference between the two of us. One is, "Computers have evolved from being primarily professional and business machines to become our primary entertainment and educational tools." There has, undoubtedly, been a change, but why is going from business applications to entertainment an "evolution" in the use of computers? It's a diversification, certainly, but considering it an evolution seems a trifle naïve.
Second, there is the somewhat triumphalistic observation: "Computers have become the heartbeat of the modern world. They communicate. They act." I assure you, they do not. We communicate, we act. Computers do not communicate any more than a telephone or a letter does. Computers are versatile instruments, useful for certain things, not so much for others.
I must confess that I am worried when I see such a triumphalistic attitude among academicians: We should value critical evaluation and detached analysis. There are already plenty of people out there who can write marketing brochures, and there is no need for us to join their ranks.
Green Computing
I was pleased to see the articles in Computer's December 2007 issue covering green computing in various forms. I look forward to the day when the IEEE gets real about the environment and makes its own contribution by offering totally paperless membership.
I have a comment on the practicality of the idea of recycling silicon (Oliver et al., "Life Cycle Aware Computing: Reusing Silicon Technology," pp. 56–61). I'm not sure if "recycling down the food chain," as the authors propose, is practical.
Moving to devices with lower computing requirements also often means moving to bigger markets. The example in this article suggests recycling a PDA processor in a GPS system, and later in a Nintendo DS. Numbers I dug up on the Net suggest that each cheaper device in this list has about four times the sales of the device a level above it. What's more, the sales of the cheaper devices in this example are increasing faster. Also, a few years down the track, a lower-cost alternative for the cheaper device will probably be available.
This kind of recycling would also require factoring in the energy costs of dismantling the device (difficult with components designed to be used once), the shrinkage of damaged components from disassembly, and the higher failure rate of devices that have already seen significant use.
It might, however, be an option to ship off obsolete or recycled parts to poorer countries where labor costs are low and create a cottage industry in building low-end but functional computers with low power demands. Many PDAs, phones, and the like easily have enough processing power to run a stripped-down free operating system like Linux.
This has more appeal to me than the One Laptop Per Child project, which is based on the misconception that owning a computer is in itself an advantage. If thousands of people in poor countries had the direct experience of building computers and massaging software to install on unusual configurations, the skills gained would be a huge boost to the local economy.
Philip Machanick
philip.machanick@gmail.com
The authors respond:
The letter writer raises some interesting issues relating to silicon reuse.
With respect to the volume of lower-end devices in a "food chain," it certainly might not be possible to supply enough recycled parts to meet demand. The goal, however, is to get more use out of the high-end devices and forestall their disposal in landfills.
Recycling costs are also definitely a concern, and our current research focus is on recycling entire systems (such as mobile handsets) instead of individual chips. A cottage industry in "poorer countries" has been suggested before, but our industrial collaborators find this a sensitive issue, and, as the writer suggests, it is important that the recycled technology be an enabler for new applications.
Fred Chong
chong@cs.ucsb.edu