

Pages: pp. 6-7


In "The End of Science Revisited" (Jan. 2004, pp. 37-43), John Horgan raises a number of interesting points that we must consider in our profession.

I suspect that a parallel article could be found, written a century ago, discounting the possibility of human flight and scoffing at ever reaching the moon. This is not to say that John Horgan's assertions are wrong or will be proven wrong over time.

Where I most disagree with Horgan is in his conclusion: "Science is never more dangerous than when it seeks to tell us what we are, what we can be, and even what we should be." Science does speak to these first two issues in many ways. It is not the final voice on the subject, but it is not a voice to be rejected.

Where science is most dangerous, in my opinion, is where it assumes that its results will be benign. One example is useful: If we accept Horgan's "scientific theological atheism" and assume that machine intelligence will not emerge, we could pass the "event horizon" that some have anticipated and not be able to get the genie back in the bottle. We only need be agnostic on this point to realize that we might need to consider the implications of technology that could pass beyond our control before we reach the projected timeframes for that event.

I encourage Computer to facilitate this dialogue with views from various perspectives, perhaps tracking the related articles and making them accessible on a Web site. I've posted some related information at

Jim Isaak, Manchester, N.H.

Regarding John Horgan's musings about whether science is at its end, my response is that it is difficult to say that we know how much we don't know. I could leave it at that, particularly because of the failings of past projections by eminent people.

The author shows a degree of speculation and "faith" when he suggests that we are scraping the bottom of the barrel. He further claims that "Scientists need a certain degree of faith to bolster their confidence in the arduous quest for truth.…"

First, "faith" is a much-abused word. It applies to concepts ranging from religious beliefs to trust in someone or something. Second, scientific investigation has nothing to do with "faith." Scientists reflexively deny faith because failing to do so would violate the scientific spirit.

A scientist's speculations are not bounded by the rigor of the scientific procedure or evidence. Those off-the-cuff remarks are the ones on which journalists thrive.

Rephrasing the author's sentence, I would say that journalists are never more dangerous than when they seek to tell science what it is, what it can be, and even what it should be.

Prasad N. Golla, Plano, Texas

I did not read John Horgan's book, The End of Science, nor am I inclined to do so. However, I did read "The End of Science Revisited" and was appalled.

Physics and cosmology are still making major discoveries such as dark matter and energy in spite of major cutbacks in big physics budgets. Fusion research is still going on, and the promise of fusion reactors supplying energy is still very much alive despite large cuts in that budget as well.

The example of the rain forest as an ecosystem too intractable to simulate is particularly bizarre. If someone knew the roles of the flora and fauna, it is reasonable to assume that a simulation could yield useful information at a useful granularity. I suspect that Horgan would then say, "The model's prediction of termite populations was off by 5 percent."

Can he find Nobel laureates to agree with him? Sure—and they can be way off the mark, especially when they step out of their respective fields. And why not "horganics" instead of "chaoplexity" to label so-called intractable systems? He really misses the boat in this area and falls prey to the Deepak Chopra syndrome: if a system is seemingly intractable, then quantum phenomena must be a major component, and it will be forever beyond our ken and the province of religion.

Now let's talk about AI. Horgan's major point seems to be that we will never understand ourselves, let alone develop machines that think. This is very much a "the Earth is at the center of the universe" view.

There is pretty much universal agreement that intelligent machines are inevitable and will happen in this century. Will this solve the problems of humanity? Doubtful.

Most human problems require human solutions: people dealing with people on a one-on-one basis. If this is what Horgan is really trying to say, then his thesis is too simple to warrant an entire book and is hardly controversial.

Gary Feierbach, Belmont, Calif.

I honestly enjoyed John Horgan's article, "The End of Science Revisited." However, at the risk of seeming mischievous, I do think that we live in times when new truths could well lie before our very noses, but we would not know them for what they are if our lives depended on it. Our society emphasizes the certainty of accepted answers, not the wisdom of acknowledging those questions to which answers are not readily forthcoming—the ultimate seeds of science.

We are so blinded by the view that science is relentless progress without any painful revisions of viewpoint that we are as susceptible as ever to unique surprises. When humanity truly desires to entertain a new thought and revise the conventional wisdom, we will progress. Indeed, the new thoughts may already be among us, remaining unrecognized.

Until then, we could be socked repeatedly in the jaw with a cold dead salmon and be none the wiser for it.

Rationality has its limits, not least of all the blind spot of our founding premise. When science recovers the will to ask new questions that challenge the limits of popular understanding, we will move forward. Until then, we remain blinded by the conceit of living off the foresight and courage of thinkers who came before us. These are the best of times and the worst of times. Creativity waits in the wings, but the times are not yet receptive to it.

Truly, it is that simple.

Kingsley Jones, Sydney, Australia


I was surprised by the inclusion of an article that uses the ITAA as an authoritative source in Computer's January issue (Fred Niederman, "IT Employment Prospects in 2004: A Mixed Bag," pp. 69-77).

The ITAA is a trade organization with the single purpose of advancing the interests of its members, which include many companies that want special treatment in the areas of hiring and fast-tracked importation of cheap labor and are willing to buy legislation to get it. The ITAA was proclaiming a huge IT labor shortage while companies were busily getting rid of their IT employees. Then the ITAA claimed that a huge shortage of IT labor was imminent while companies continued to dump more workers and the recession deepened. Even the computer trade rags have been so embarrassed by repeating claims from the ITAA that they now qualify their citations.

Anyone interested in the veracity of the ITAA and its president, Harris Miller, need only do a Web search on "harris miller electronic voting." The ITAA will say anything as long as it is paid to say it—that is what it does for a living. That kind of source, unless used to document its nonsense, does not belong in Computer.

Terrence Vaughn, Garretson, S.D.


The Industry Trends column in Computer's January issue (Steven J. Vaughan-Nichols, "Vendors Go to Extreme Lengths for New Chips," pp. 18-20) includes the following quote: "The best way to increase the number of executed instructions per clock cycle is by increasing a chip's frequency." Last I checked, a clock cycle was the inverse of frequency. So how is it that we're going to increase the number of executed instructions per clock cycle?
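The letter's objection can be made concrete with a little arithmetic. In the standard decomposition, throughput is instructions per cycle (IPC) multiplied by clock frequency; raising frequency raises instructions per *second* while leaving instructions per *cycle* untouched. The following sketch, with purely illustrative numbers, shows the distinction the quoted sentence blurs:

```python
def instructions_per_second(ipc: float, frequency_hz: float) -> float:
    """Throughput = IPC x frequency. Both arguments are illustrative
    values, not measurements of any real chip."""
    return ipc * frequency_hz

ipc = 2.0  # hypothetical instructions completed per clock cycle

base = instructions_per_second(ipc, 1e9)  # at 1 GHz
fast = instructions_per_second(ipc, 2e9)  # at 2 GHz

# Doubling the frequency doubles instructions per second...
assert fast == 2 * base
# ...but the instructions executed *per clock cycle* (IPC) are unchanged,
# which is exactly the distinction the quoted sentence conflates.
```

Increasing IPC instead requires microarchitectural changes (wider issue, better branch prediction, and so on), not a faster clock.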

This leads to the following conclusions: the source is in error, the writer did not catch the error, and the editor did not catch the error.

I think we can do better to maintain Computer's quality.

Richard L. Lozes, Pleasanton, Calif.


The topic of the December 2003 The Profession column is one that needs to be aired more often (Neville Holmes, "The Digital Divide, the UN, and the Computing Profession," pp. 144, 142-143). Personally, I don't hold out much hope that people, organizations, and governments that benefit from the current inequities in the use of digital technology can be counted on—or persuaded—to correct them. I think that the needed changes will have to come from the bottom up, with some assistance from people of conscience—folks like George Soros, for example—who also have power and resources to help.

It needn't take much to get a good start. Take a look at the Grameen Foundation, which has been making microloans to poor people for a few decades now. This organization has helped the communities it serves make considerable progress while creating an entire business community around the idea of microcredit. Its US "branch" is currently starting up a technology center that will start by helping microcredit organizations get the basic automated infrastructure that commercial banks take for granted.

Although I don't have any good links to offer, a number of other organizations are working specifically to develop a basic communication and information infrastructure for poorer countries and communities. Maybe it would be worth identifying these organizations in another article and commending them to professionals wishing to help.

Don Dwiggins, Northridge, Calif.

Neville Holmes responds:

I appreciate receiving this information. I have added the two URLs, plus another pointing to an Economist article on microcredit in India, to the links I provide with The Profession column, which include other examples of providing low-level technical help to poor people. And if any reader is willing to write a 2,000-word essay on "Microcredit and the Computing Profession," I would be delighted to consider it.
