Abstract—Useful insights can be gleaned from the process of collecting many inputs and synthesizing the big picture.
Engineers are in the business of creating the future. We pursue this creative process in two parallel formats: hopes and fears.
We hope to accomplish something good; ideally, the world loves and values our product so much that it rewards us and our company by lavishing money on us. If we fall short of that, well, we hope our new creation is at least profitable.
Meanwhile, as good designers, we fear all the ways in which our product could fail to perform as intended. We try to anticipate usage patterns and also all error conditions. If we misconstrue the usage patterns, our product will be hard to use or will fail in the marketplace. If we fail to anticipate error conditions, marginal operating conditions, or failure modes, the product could be unsafe or it could simply be perceived as less well thought-out than the competition's and be devalued accordingly.
Conceiving a product requires imagining the future, trying to match the technology that will become available to what buyers will want to do with it. Many things can cause this conception phase to go seriously awry.
You can underestimate technology improvements between now and product time. If your competitors get this right and you don't, their product will outperform yours by an embarrassing margin. Overestimating such improvements will cause you to slip schedule while you try to salvage the situation with a technical "diving save."
Incidentally, recovering from either underestimating or overestimating the technology is not impossible—amazing things happen when very bright, motivated people find their backs against the wall. They will sometimes see things from that perspective that they never would have seen otherwise. Just don't count on this happening. As the military folks say, "Hope is not a plan."
You can also go way off track in predicting what buyers will want to do with your new technology. There are at least two major ways to get this wrong. You can linearly extrapolate from what people do right now, only to find they've all gone nonlinear in the meantime. Or the converse can happen. Cell phone makers finding that large numbers of users don't seem to mind instant-messaging with their thumbs exemplifies a nonextrapolatable emergent behavior. The audio industry of the 1970s guessing that if stereo was good, quad must be better, is an example of the converse.
There are so many ways to get this wrong that it's tempting to look for ways of avoiding the issue. All those people working two aisles over who have "Strategic Planner" emblazoned on their business cards must have secret ways of divining future directions for your company. You've suspected all along that some of them must have sold their souls to the devil, like Ralph Macchio's character in the 1986 movie, Crossroads. Surely these creatures receive hints from the future in their dreams and have deep insights into where things are heading. After all, knowing these things is their job, right?
Well, sorry to be the bearer of bad news, but that's not how it is. Your coworkers in marketing and planning probably know your corporate roadmap better than you do. You're "only" responsible for one item on that roadmap, but it probably includes 10 or 20 or more items. Their job includes maintaining a reasonably high-level understanding of the big picture. These folks may also have a better idea of what the customers want than you do. But they synthesized that picture by talking to customers—and where did those customers get their vision of the future? Perhaps Macchio works for them. Don't count on it, though; it's more likely that they guessed, with all the same caveats outlined earlier.
I'm not demeaning the process of collecting many inputs and synthesizing a big picture. Very useful insights can be found that way. The US Department of Defense believed in this idea so much that it formally proposed creation of a "futures market" for people to use in betting on various terrorism possibilities ( www.fcw.com/fcw/articles/2003/0811/pol-misguided-08-11-03.asp).
Similar to the way that the real stock market tends to express the innate beliefs of large numbers of people as to the future of any given company, the idea was that patterns might emerge from a similar "market" that would help defend against future atrocities. Better yet, while financial stock markets explicitly disallow insider trading, this futures market would actively encourage it because it would help the patterns emerge more strongly. But the general public found this notion so macabre and distasteful that the DOD had to back away from it at maximum warp speed.
In the end, product engineers can't rely on anyone else to show them the One True Vision for what they are designing. Input from marketing, planning, and management is valuable and must be sought and included.
But ultimately, the product engineers must believe in what they are doing. They must feel that the world needs what they alone can create, that what they are bringing into existence is so important that it's worth all the missed family time, the lost weekends and evenings, the fatigue, and the feeling that their lives have been on hold for several years. They must "sign up" for the product, and the only way they can truly do this is to have ownership of its charter.
In the 1980 movie The Blues Brothers, Jake and Elwood Blues say they are on a mission from God. Engineers who really believe in what they are doing often feel exactly that same way.
One of the true joys of engineering is the feeling that comes with being on a team where everyone feels that calling, that sense of being an integral part of something much larger. If they expect to produce truly world-class results, engineering companies must actively seek to inculcate this feeling in their design teams.
I once attended a validation review in which the group's manager rather diffidently ran through a superficial list of topics, with no real plan for how to deal with any of them, and smoothly segued into wrapping up the meeting. I objected and pointed out that the number of topics presented was about 2 percent of the total list needing validation, and even that 2 percent had been covered only superficially.
The presenter said, "Look, designers have it easy. You only have to find one way to make something work, and once you've found it, you're done. I'm expected to test so many different features, and in so many combinations, that the heat death of the universe will occur well before any truly comprehensive test plan could be even half completed."
My jaw dropped. I asked, "Did you just tell me you can't do your job?"
"What I just told you is that nobody can do this job. It's provably, mathematically impossible," was the reply.
I said, "Okay, then, I'll make a prediction. No matter what you say, and no matter what you do, my team will design a world-class microprocessor, and somebody will lead the required testing of it. That person will do the best he can, and we'll help him prioritize his limited time and resources toward a level of testing we jointly believe is appropriate for a product we can all be proud of.
"I suggest that your efforts would be better spent figuring out what needs to be done, instead of trying to set things up so you can't be held accountable. You can and you will."
We were never friends after this exchange. Of course, we weren't friends before it, either.
This validation person wasn't completely wrong, of course. A modern microprocessor represents enough complexity that there's no real hope of ever testing it to saturation. At any given moment, show me the list of all the things you've tested so far and, assuming your testing was itself error-free (not at all a given), the bugs just migrate down the list toward the things you haven't tested yet. Validation is very difficult, no doubt about it.
But there are things we can do about the situation. The main thing is to realize that bugs are not randomly distributed in a design as though sprinkled in by a malicious deity. Bugs are designed in by well-intentioned but fallible design engineers, and patterns are often present. For example, complexity, new features, and human hands all cause bugs. Okay, automated tools sometimes breed bugs too, but they often spew them all over the place, so they're not likely to go unnoticed.
While developing the original P6 Pentium Pro, we decided to make the CPU capable of "glueless multiprocessing." This meant that we would be able to put one to four microprocessors in the system, and they would handle all of the cache coherence and initialization duties automatically. However, I knew of many competitors who had tried to achieve this and had a lot of trouble getting it right. All of them had to ship the first few revisions of their chips as uniprocessor-only because they had missed some obscure corner cases of the cache coherence protocol. I wanted to learn from their travails and get it right on the first try.
So I proposed that we hire a new group, unanticipated in the original project headcount, with the explicit charter of getting P6's multiprocessing functionality right. We hired people with MP backgrounds and charged them with the MP mission. They came through beautifully.
And this is where the surprise occurred. During a meeting with an executive in 1996, he said, "You hired an extra 10 heads for the MP feature on P6, and it came out right." I thought, Yes, that's right, you should praise my foresight. I saved you a lot of money, and you're lucky you have me on your team.
But then he said, "Looks like you didn't need them after all." I was staggered. What? No! They are the reason it came out right! He just smiled. I wasn't smiling.
In the US, we have a five-stage color-coded terrorist threat warning system. Green (low) means the year has somehow magically turned back to 2000; blue (guarded) means be careful; yellow (elevated) means be anxious; orange (high) means something evil this way comes; and red (severe) means how well do you really know your spouse?
As head of the Department of Homeland Security, part of Tom Ridge's job is to establish the current threat level. And for the same reason as in the MP saga, he can't win.
If he assigns a color indicating a high threat level and in doing so averts a particular terrorist attack, nobody knows it because the predicted attack didn't happen. The public only knows that the warning temporarily made their lives more difficult, for no discernible reason.
On the other hand, if the color is anything but red and something big does happen, Mr. Ridge can expect no understanding or sympathy because of the difficulty of balancing imperfect knowledge against communal costs and risk. His predictions affect the very future they try to describe.
In engineering, as in war, we use our predictions to play to strengths and ameliorate weaknesses. But in placing our bets that way, we alter the future. If we predict that a certain feature of a new product is highly likely to be buggy, and we therefore concentrate our efforts on the design and testing of that feature to the point where it becomes the most solid part of the product, does it mean our prediction was wrong?
While on vacation in a beach town this summer, I wandered by the local library, which was conducting its annual fund-raising drive. This fund-raiser consisted of several shelves of books marked for sale at $1 each. Skipping all of the Stephen King, Clive Cussler, and Dean Koontz novels left a small pile that I gleefully scooped into a bag.
One of these books was Visions of Technology (Simon & Schuster, 2000) by Richard Rhodes. About 10 months ago, I read Rhodes's Deadly Feasts (Simon & Schuster, 1998), which covers the prion theory of mad cow disease. That book did such a good job of tying modern beef production methods to mad cow disease that I have been avoiding beef ever since. It won't make vegetarians any happier, though, because it also points out that a common garden amendment is blood meal. You do know what that is and where it comes from, don't you?
Rhodes's Visions book surveys a wide range of important, famous, or interesting people who share their personal views on technology in the 20th century. These folks have a lot to say. The following quotes come from this book.
On the nature of predicting, for instance, F.H. Clauser said,
Occasionally, some devilish individual takes the trouble to go back and compare past predictions with later reality. Invariably, he finds that engineers and scientists are a conservative lot in their predictions. The immediate problems that confront them appear so formidable that they flinch from predicting ever-accelerating progress and conjure up visions of a natural barrier ahead which will cause the curve of progress to flatten off….
Indeed, it's this sentiment that most gives me pause when I argue that Moore's law is significantly slowing. But I still say it because … it is.
On the impact of radio at its dawning, M.H. Aylesworth, president of the National Broadcasting Company, explained, "People in all countries of the civilized world, hearing the same programs—music, speeches, sermons and so on—cannot fail to have a more friendly feeling for each other."
With the benefit of 70 more years of experience, I think it's safe to say Mr. Aylesworth was only partly right. What he may not have foreseen is that what a culture chooses to broadcast through its mass communications media is often not the best that culture has to offer. I shudder to think what cultural impression is conveyed to someone who only sees MTV, sitcoms, or Super Bowl half-time performances replete with "wardrobe malfunctions."
In The Lexus and the Olive Tree (Anchor Books/Doubleday, 2000), Thomas Friedman sees a pattern similar to what Aylesworth envisioned. He noted that no two countries with McDonald's restaurants had ever gone to war with each other; perhaps by now some have, but certainly not many. Besides, one or even a few exceptions aren't nearly as interesting as the basic premise that capitalism and mass consumerism may be more important influences on foreign policy than, say, the United Nations.
Charles Lindbergh said,
Whatever a man imagines, he can attain if he doesn't become too arrogant and encroach on the rights of the gods. Is aviation too arrogant? I don't know. Sometimes, flying feels too godlike to be attained by man…. In developing aviation, in making it a form of commerce, in replacing the wild freedom of danger with the civilized bonds of safety, must we give up this miracle of air? Will men fly through the sky in the future without seeing what I have seen, without feeling what I have felt? Is that true of all things we call human progress? Do the gods retire as commerce and science advance?
Lindbergh raised an excellent point, and I propose that we've seen more than enough technology at this point to answer him: Yes, humans imagine and create, and those who use their creations soon relegate them to the status of kitchen brooms, taken for granted except when they malfunction or otherwise fail to satisfy their owners' ever-increasing expectations. If there is a kitchen broom god, she gets busier every day.
In 1967, Emmanuel Mesthene said, "… we must not blink at the fact that technology does indeed destroy some values. It creates a million possibilities heretofore undreamed of, but it also makes impossible some others heretofore dreamed…. Mass production puts Bach and Brueghel in every home, but it also deprives the careful craftsman of a market for the skill and pride he puts into his useful artifact."
Mesthene is absolutely right, in general. There are still pockets of production that have remained the domain of the solo craftsman: A high-end luthier, for example, routinely produces guitars that are far superior to the best product of computer-controlled automation.
May it ever be thus. In fact, I predict it.