Issue No. 3, March 2009 (vol. 42)
pp. 6-7
Published by the IEEE Computer Society
Green IT Column
I'm thrilled that Computer has added the Green IT column (Jan. 2009, pp. 101-103). Green will be the next Internet and will certainly be enabled by computer technology.
But after reading the first installment, I'm oddly unmoved. The author cites statistics showing that server farms consume 1.2 percent of US electric power and that 2 percent of the solid waste stream comes from consumer electronic devices. Those numbers mean nothing in isolation.
Does the use of these marvelous devices reduce travel, for instance, netting an overall improvement in the environmental landscape? Are there other beneficial second-order impacts?
Further, where does the other 98 percent of solid waste come from? Although I'm all for doing anything we can to tame the mountainous landfills, the impact of even completely eliminating this e-waste is in the noise. But there must be some stunning opportunities in that 98 percent, areas where some smart computing could cause significant reductions in disposables. And considering just how much power is wasted through inefficiencies, I suspect that cleverly cranking a few more electrons through some silicon could cause drastic drops in electric usage.
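As a rough sketch of the arithmetic behind "in the noise" (the total waste-stream figure and the hypothetical 10 percent reduction below are assumptions for illustration, not numbers taken from the letter or the column), in Python:

    # Back-of-envelope comparison; all inputs are illustrative assumptions.
    US_MSW_TONS_PER_YEAR = 250e6   # assumed total US municipal solid waste
    E_WASTE_SHARE = 0.02           # the 2 percent share cited in the letter

    e_waste_tons = US_MSW_TONS_PER_YEAR * E_WASTE_SHARE
    other_waste_tons = US_MSW_TONS_PER_YEAR - e_waste_tons

    # Eliminating e-waste entirely versus trimming the other 98 percent by 10 percent.
    print(f"eliminate all e-waste:        {e_waste_tons / 1e6:5.1f} million tons/year saved")
    print(f"cut the remaining 98% by 10%: {0.10 * other_waste_tons / 1e6:5.1f} million tons/year saved")

Under these assumed figures, a modest improvement in the larger stream dwarfs even a total elimination of e-waste, which is the letter's point.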
To my mind, the forthcoming green/energy revolution is tremendously exciting. I hope young people will be inspired by it to go into science and engineering, just as Apollo inspired my generation.
I look forward to future installments of this column.
Jack Ganssle
jack@ganssle.com
The author responds:
Thanks for your perspective—you make some interesting points. According to the EPA, server farm energy usage is comparable to that of TV sets in the US and rising. See www.energystar.gov and search on "servers" for related reports.
In my article, I was trying to point out that electronic waste is a concern because of its growth rate and toxicity. As you point out, efforts to curb the other 98 percent of the waste stream are critical. For more information on other forms of waste, please see the "Municipal Solid Waste Characterization Report," also available on the EPA's website. The second-order effects of computing on waste are certainly important and may make for an interesting future column.
Thanks for reading and voicing your opinion.
Kirk Cameron
kirk.w.cameron@gmail.com
Cloud Computing
Reading "Is Cloud Computing Really Ready for Prime Time?" (N. Leavitt, Technology News, Jan. 2009, pp. 15-20) rekindled my memories of entering the computing field as a mainframe operations technician in the mid-1980s.
Defined by their size, early mainframes were large enough to fill a room, and purchasing and maintaining them cost millions of dollars. Those immensely powerful computers were quite reliable, routinely running around the clock for years at a time. Our team of technicians worked in 24/7 shifts to service the big machines. Users connected through dumb terminals; huge line printers produced printouts. Only two decades later, things have changed dramatically.
Without question, many large organizations still use mainframes. Unlike PCs, these machines have high reliability. Their software is vastly more stable and reliable than the software that runs on desktop machines. They can process immense amounts of information, making them perfect for banks, airlines, or any organization that must track millions of transactions.
While the technology may have been updated to be faster, more cost-effective, more pervasive, and more scalable, cloud computing is essentially a centralized, mainframe-like core with distributed nodes.
Without doubt, many of the tasks that previously were mainframe workloads are now being done more cost-effectively in the client-server environment to which most of the business world has been moving. Nonetheless, mainframes have maintained a market niche in global computing, evolving over time to support more operating systems and applications.
As cloud computing services can and will be provided at any layer of the IT computing stack, from raw compute services to business process services, I believe this technology will eventually take on the role of mainframe computers.
Hong-Lok Li
lihl@ams.ubc.ca
The Credit Crunch and the Digital Bite
I agree with Neville Holmes that we will have to use system engineering principles if we expect to solve the current economic problem (The Profession, "The Credit Crunch and the Digital Bite," Jan. 2009, pp. 116, 114-115). In fact, there already is extensive information on how systems fail, if only we would look at it.
In To Engineer Is Human: The Role of Failure in Successful Design (Vintage, 1992), Henry Petroski noted that, as systems evolve, engineers reduce safety factors in an effort to increase performance. With extensive knowledge of existing systems and good analysis of their performance, identifying and eliminating unnecessary margins does not create a problem. Occasionally, however, the system becomes vulnerable to an unrecognized or poorly understood failure mode. This is when disaster occurs.
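A toy numerical sketch of Petroski's argument, with loads and safety factors that are purely assumed, might look like this in Python:

    # Each design iteration trims the safety factor to gain performance,
    # until a rare, unmodeled load exceeds capacity. Numbers are invented.
    design_load = 100.0                      # the load the engineers design against
    safety_factors = [2.0, 1.6, 1.3, 1.1]    # margin shrinks release by release
    unmodeled_load = 125.0                   # a failure mode nobody accounted for

    for version, sf in enumerate(safety_factors, start=1):
        capacity = design_load * sf
        status = "survives" if capacity >= unmodeled_load else "FAILS"
        print(f"v{version}: safety factor {sf:.1f}, capacity {capacity:6.1f} "
              f"-> {status} against the unmodeled {unmodeled_load:.0f} load")

Every iteration looks safe against the design load; only the unmodeled load exposes the eroded margin.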
In the current economic crisis, the people engineering the system, a.k.a. politicians, were making modifications to the system to improve one measure of performance: the percentage of home ownership in the US. Normally, new entrants into the workforce spend their initial years paying off debts so that they can qualify for a mortgage and save money for a down payment. This delays when they can buy a house and depresses the home ownership percentage. Laws like the Community Reinvestment Act required banks to consider customers who had not yet completed the process of paying off debt and saving. Giving these new customers mortgages increased the home ownership percentage, for which the politicians were happy to get the credit. Unfortunately, it also increased the risk of mortgage loan defaults.
At this point, the system was made more vulnerable by what is known as suboptimization. In a suboptimization, a subsystem is optimized in a way that reduces the performance of the overall system. An example is increasing the throughput of a transaction processing system so that the increased number of transactions overloads the underlying network. Something similar appears to have happened in the mortgage industry when banks started packaging and selling mortgage-based securities. These securities allowed banks to spread the risk of default among more parties. Normally, more people sharing risk, and being paid for it, is a good thing. Here, it was also used to increase the overall level of risk within the system. No single individual was assuming excessive risk, but because risk was more widely distributed, the system was more vulnerable.
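A minimal sketch of that suboptimization example, with invented capacities and a deliberately crude congestion model (nothing here is drawn from the letter itself), might look like this in Python:

    # Tuning one subsystem (the transaction front end) past the capacity of a
    # shared resource (the network) lowers end-to-end throughput.
    NETWORK_CAPACITY = 1000  # transactions/second the network can carry

    def completed_tps(offered_tps: float) -> float:
        """End-to-end throughput for a given offered rate.

        Below capacity, everything gets through. Above capacity, retries and
        congestion waste bandwidth, so goodput falls instead of flattening
        (a crude stand-in for congestion collapse).
        """
        if offered_tps <= NETWORK_CAPACITY:
            return offered_tps
        overload = offered_tps / NETWORK_CAPACITY
        return NETWORK_CAPACITY / overload

    for offered in (500, 900, 1000, 1500, 2000):
        print(f"front end offers {offered:5d} tps -> "
              f"system completes {completed_tps(offered):7.1f} tps")

Past the shared network's capacity, pushing the front end harder lowers overall throughput, mirroring how locally sensible risk transfers can raise system-wide exposure.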
An analysis like the one Holmes provides and the one that I have given can be helpful, but it remains to be seen whether those in charge want to listen. It is human nature to look for someone to blame other than oneself. Taking a system view will force the keepers of the system to admit that they had a part in causing the problem.
Victor Skowronski
victor31@ieee.org
We welcome your letters. Send them to computer@computer.org.