Letters

Issue No. 4, April 2003 (vol. 36), pp. 7-9
Published by the IEEE Computer Society

IP RIGHTS: INNOVATION VERSUS STANDARDIZATION
To the Editor:
In "IP Rights in Industry Standards" (Feb. 2003, pp. 25-27), Daniel Lin offers an excellent discussion of the legal problems of incorporating proprietary IP into industry standards. However, the root cause of problems is the attempt at "engineering by setting standards."
The goal of standards should be to stifle innovation in the area being standardized. Standardization means we no longer have to innovate in that area because it is already optimized. Standardized electrical outlets eliminate the need to innovate in that area. Innovation can be directed to the devices that use the electricity instead of wasting it on tinkering with the connection.
Interoperability does not require standardization. An alternative solution, the use of adapters or converters, has worked successfully for many years in fields as diverse as software and plumbing. Consider graphics formats as an example: formats proliferate; converters abound; innovation continues. Purists might complain about a lack of elegance, but most programs, even inexpensive shareware, can deal with most formats.
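To make the converter point concrete, a few lines of Python using the Python Imaging Library (PIL, or its modern fork Pillow) are enough to translate one graphics format into another, with no shared standard required. This is only a sketch; the file names are placeholders, and it assumes the library is installed.

    # Convert a PNG file to JPEG using PIL/Pillow (assumed installed).
    # "figure.png" and "figure.jpg" are placeholder file names.
    from PIL import Image

    image = Image.open("figure.png")                 # read the source format
    image.convert("RGB").save("figure.jpg", "JPEG")  # write it out in another format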
Setting standards before the technology has matured, or in some cases before the first product is built, is scientifically unsound. It amounts to reaching a conclusion before the experiment is run—a basic violation of the scientific method. No amount of juggling the rules of IP law or policy will make this approach sound.
Innovation and proprietary IP rights should advance together; standardization should come later.
Tom Vaughan
Stoughton, Mass.
vaughaTJ@ix.netcom.com
Research Model Options
To the Editor:
It was intriguing to read "Computer Electronics Meet Animal Brains" (Chris Diorio and Jaideep Mavoori, Jan. 2003, pp. 69-75) and realize that our electronic technology is advancing to the point where it can benefit us in ways that we've only dreamed of. Finally, that robotic hand, kidney, pancreas, or heart might soon be within reach.
However, I was disappointed by the authors' reliance on animal models when methods such as epidemiology, in vitro research, mathematical modeling, cadaver studies, and clinical observation would yield information that is directly applicable to humans. Science is about generating information that allows us to predict what will happen in a target system. If studies on humans cannot even assume that what works for men will also work for women, it is practically impossible to show how learning about animal behavior translates into knowledge that will lead to human prosthetics.
The reason human models are better is simple: The DNA of all other living creatures differs in composition from human DNA, and thus their molecular biochemistry differs as well. When researchers use human-based models, it isn't necessary to extrapolate from unrelated systems.
How can we rely on a study's predictability when we don't know what the variables are? It isn't possible to determine whether the test conditions in animal models inhibit some behavior that we don't know about or cause discomfort or side effects that can interfere with the results, making the information's usefulness questionable.
Neglecting to explain why this project received funding from the Office of Naval Research gives the impression that the authors are hiding the true purpose of this technology behind the prestige of human prosthetics. If the goal is to program animals to kill, we won't care how the technology applies to humans.
Rather than being an effort to evade modern research methods or to develop lethal technology, I think it's more likely that using animal-modeled research is about doing things the way they've always been done. Hopefully, future research will focus on human models. The technology will make it to the public faster, it will be more reliable, and researchers will avoid inflicting needless human or animal suffering.
Mark Whitt
Toledo, Ohio
mark@whittfamily.com
Revised Technorealism Principles
To the Editor:
Although I always enjoy reading Neville Holmes's The Profession column, I do not agree with his assertion that only people process information, while machines only process data ("Revising the Principles of Technorealism," Jan. 2003, pp. 128, 126-127).
This assertion implies that there is an inherent distinction between human processes and machine processes, but we know this is not the case. Some processes are identical and others are different; some that differ today could become the same at some future time, given the progress of data/information processing systems.
Some processes are the same for humans processing information and machines processing data. For example, when I add 2 and 3, it is essentially the same process that goes on in Excel.
At the beginning of artificial intelligence, there were heated debates on the issue of whether machines, specifically computers, are intelligent and whether they could be or are more intelligent than humans. These debates continued until everybody eventually agreed that the Turing test was the proper yardstick for this issue. However, in the past 30 years, the case for machines being less intelligent than humans has been made in a variety of ways, the form changing to adapt to better and faster hardware and software without harming the human ego:

    • Machines are not as good at chess as the best human players.

    • Machines do not learn.

    • Machines may be better at playing chess, but they don't play the game as humans do.

As an alternative to the assertion that Holmes makes, I would offer the well-known declaration, "Do not suppose that the machine supposes." It seems to me that this captures the intention, namely that we should understand how machines work and know how they process data/information.
We should not assume that machines can do what we do. On the other hand, we cannot completely rule out the possibility that they might.
Christophe Alviset
Paris, France
c.alviset@infonie.fr
Neville Holmes responds:
The standard definition of data has them as representing facts or ideas. Instructions inside (or outside) computers are thus data. The data stored in computers and transmitted over networks are simply representations of facts or ideas. The standard definition of information is that it is the meaning that people give to ideas.
The data are the formal representations, which can thus be processed by machines—or by people, for that matter—but information only exists within people's minds.
These two definitions, from a formal international standard (IFIP-ICC Vocabulary of Information Processing, North Holland Publishing, Amsterdam, 1968), are of course out of line with common usage. But computing professionals have a responsibility to adopt and adhere to a standard terminology. In this case, the wise computing professionals of the 1960s put a lot of effort and thought into developing the standard, and it is up to us to respect it.
By defining the profession's two most fundamental terms as they did, the people who put our standard vocabulary together provided us with a simple method for distinguishing people from machines. This is not a technical issue—it's a professional issue. Whether machines might or might not be as effective as, or more effective than, people in various ways is irrelevant.
To the Editor:
In "Revising the Principles of Technorealism," I am slightly confused about the difference between Neville Holmes's version of the seventh principle and the one he cites from the technorealists' manifesto. Is it only the insistence that the radio spectrum cannot be owned? How does that really differ from positing ownership in an abstract entity such as the "public?"
Mike Barnett
Redmond, Wash.
mbarnett@microsoft.com
Neville Holmes responds:
Your confusion is understandable. I was trying to make too many points in the space of a few paragraphs.
First, there is in principle a significant difference between the ownership of property asserted in the original principle, and the licensing of behavior that my restatement embodies.
Second, there is a significant difference, in my mind at any rate, between the public benefit from the original principle and the community control for the public good in my restatement. The public can benefit from the dividends paid by the commercial enterprises that control the use of electromagnetic radiation. But emphasizing community control makes the point that the primary purpose of that control should be the public good, not the maximization of profits.
Third, and indirectly, my rewording was intended to make the point that the "airwaves" of the original principle should properly apply to acoustic rather than to photonic signals.
To the Editor:
I am very glad to find that someone else thinks the morphing of "data processing" into "information technology" is positively harmful, as is the idea that having more information also means having more knowledge. As Holmes says, people process data into information, which in turn is processed into knowledge (sometimes). Tom Watson used to say, "Machines should work, people should think." He could distinguish these concepts.
Thomas R. Leith
St. Louis, Missouri
trl@computer.org
Neville Holmes responds:
Unfortunately, the most frequent views I get on this terminological issue are along the lines of "Well, the public at large and most of the profession regard data and information as almost synonymous, so we just have to go along with it." My usual reply is that we have a professional responsibility to look after our terminology, and that these are the two most important terms we have.