
From the Editors: Ask the Hard Questions


Pages: 3–5

This editorial message will be a bit more technical than an editorial message really should be. I beg your indulgence because I have an important point to make.

For particular classes of problems that we can state as generalizations of finding the period of a discrete function or locating an item on a discrete list, quantum computing promises a dramatic speedup over classical machines. Quantum computing has had a stimulating effect on the theory of computing, generated some excellent thinking about the foundations of quantum mechanics, and justified some elegant experimental work in applied physics. But, as was probably inevitable, it also has generated some myths. I think the myths actually affect public perception of what's possible in computing technology and, by implication, can lead to wild overestimates of what to expect from computational science.

On the most sophisticated level, dispelling these myths is not so easy, because they grow out of difficulties in understanding what quantum mechanics really says about the nature of the physical world—a topic that has troubled even the greatest scientific minds of the past century. However, on a more elementary level, we can explode two of the myths fairly easily. The easy myths are

  1. Quantum computing is more efficient than classical computing for all possible computations, meaning more results per unit work (the "something for nothing" myth).
  2. Quantum computing exploits quantum parallelism in a way comparable to, but better than, classical parallelism (the "each n-qubit quantum state is 2^n classical bits" myth).

To take care of the something-for-nothing myth, let's start with a small logic puzzle that's probably several thousand years old. A confused and weary traveler stops at a crossroads in a strange country. He wants to get to the village before nightfall, and he knows that he must make a turn at this crossroads, but which way should he go? The country is strange in more ways than one. Its inhabitants will answer only one yes/no question per day, and all inhabitants fall into one of two disjoint sets: those who always tell the truth, and those who always lie. So, what question to ask?

The right question for our traveler to ask is a double negative: "If I were to ask you if a left turn is the way into town, would you say 'yes'?" This puzzle's interesting feature is that the answer to one question gives information that might seem to require at least two questions, but this is only an illusion. The traveler doesn't learn if he's talking to a liar or not. It's just that his question gives him the single piece of information he needs.
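The double-negative trick can be checked exhaustively. The sketch below is an illustrative model (the function name `answer` and the boolean encoding are my own): a liar negates the truth of the inner question, and negates again when reporting what he would have said, so the two negations cancel.

```python
def answer(inner_truth: bool, is_liar: bool) -> bool:
    """What an inhabitant replies to: 'If I asked you Q, would you say yes?'

    inner_truth: the actual truth value of Q ("a left turn leads to town").
    is_liar: whether this inhabitant always lies.
    """
    would_say_to_q = inner_truth != is_liar  # a liar would negate Q's truth
    reply = would_say_to_q != is_liar        # a liar also negates the meta-answer
    return reply

# The reply always equals the truth of Q, for all four cases.
for left_is_way in (True, False):
    for is_liar in (True, False):
        assert answer(left_is_way, is_liar) == left_is_way
```

The reply carries exactly the one bit the traveler needs, while telling him nothing about whether he spoke to a liar.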

Readers familiar with Deutsch's algorithm from quantum computing might find this scene hauntingly familiar. In the case of Deutsch's algorithm, a single query tells whether a function F: {0, 1} → {0, 1} is constant or not, meaning it takes on one or two values. However, this single query does not tell us which of the four possible F's we're looking at; it just tells us if F is in the class "takes only one value" or the class "takes two different values." In other words, only one question is asked, and only one is answered. In the case of the more general Deutsch-Jozsa algorithm, it's sometimes claimed that one of 2^n possibilities is decided with only one question asked. This is an illusion too, although seeing why it's an illusion is a little more difficult. In short, we're told at the outset that the function is one of two types, constant or balanced, and one question determines which type it is.
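Deutsch's algorithm is small enough to simulate directly. The following is a minimal state-vector sketch (the names `oracle` and `deutsch` are my own): prepare |0⟩|1⟩, apply Hadamards, query the oracle U_f|x, y⟩ = |x, y ⊕ f(x)⟩ once, apply a final Hadamard to the first qubit, and read that qubit: 0 means constant, 1 means balanced.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    """Build U_f |x, y> = |x, y XOR f(x)>; basis index is 2*x + y."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])          # |0>|1>
    state = np.kron(H, H) @ state            # superpose both qubits
    state = oracle(f) @ state                # one and only one query
    state = np.kron(H, I2) @ state           # interfere the first qubit
    p_one = state[2] ** 2 + state[3] ** 2    # probability first qubit reads 1
    return "balanced" if p_one > 0.5 else "constant"

for f, kind in [(lambda x: 0, "constant"), (lambda x: 1, "constant"),
                (lambda x: x, "balanced"), (lambda x: 1 - x, "balanced")]:
    assert deutsch(f) == kind
```

Note what the single query delivers: the class of F (constant or balanced), never the identity of F among the four possibilities, exactly as the editorial argues.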

The number of queries is what we often count when reporting quantum algorithm performance, but in between queries, unitary operators are applied to produce entanglement, and entanglement sets up the situation in which queries give the answer to the question of interest. However, the amount of entanglement produced by each application of a unitary operator is bounded, and this, in turn, puts a lower bound on the possible speedup. In some cases, we gain virtually no advantage from quantum operations.

It's as if our traveler started out in a country in which everyone lied some of the time, and by asking questions and applying operators, the country gradually changed to a completely yes/no place. However, in some of the commentary on quantum computing, the application of unitary operators isn't included in the operations count. Of course, implementing the unitary operators has everything to do with the gate count—the complexity of the machine that would actually execute the quantum algorithm.

The second myth, confusion about the meaning of n qubits and the relation of that meaning to parallelism, arises from the notion of superposition of states. In classical statistical physics, quantities are defined and determined as averages over large ensembles of preexisting values. For example, if we say that a monatomic gas with a Boltzmann distribution of energy is in equilibrium at some temperature T, we're making a statement about the average (expected value) of the individual atoms' energies. We assume that the individual atoms have particular energies that exist independent of whether or not we happen to measure the temperature. In quantum physics, some of the same words are used, but their meaning changes, and this difference is all-important. A superposition of n qubits is not a "parallel" collection of values or an average of preexisting particular values, even though measurement of a state is called determination of the expectation value. In fact, if we assume that states do stand for preexisting values, we arrive at a contradiction of the basic mathematical properties of three-dimensional space. And if we try to get around the contradiction by restricting ourselves to rational numbers, we contradict quantum mechanics itself.
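The gap between a state's description and what measurement yields can be made concrete. This is an illustrative sketch (the variable names and the choice n = 10 are mine, not the editorial's): describing an n-qubit state classically takes 2^n complex amplitudes, yet a measurement collapses all of that to a single n-bit outcome, so the 2^n numbers are not 2^n retrievable classical bits.

```python
import numpy as np

n = 10
dim = 2 ** n                       # 1,024 complex amplitudes describe the state
rng = np.random.default_rng(0)

# A random normalized state vector: the full classical description.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)

# Measurement in the computational basis: one n-bit outcome, drawn with
# probability |amplitude|^2. Only n classical bits come out.
outcome = rng.choice(dim, p=np.abs(amps) ** 2)
print(f"description: {dim} amplitudes; measured outcome: {outcome:0{n}b} ({n} bits)")
```

This is consistent with Holevo's bound: n qubits can convey at most n classical bits of information, superposition notwithstanding.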

Here's another old saying appropriate to quantum computing's development: Question everything, and hold fast to what is true.


I truly appreciated reading the article in September/October's CiSE about the experience of the Sloan Sky Survey in moving from OODBMS to RDBMS ("Migrating a Multiterabyte Archive from Object to Relational Databases," vol. 5, no. 5, 2003, pp. 16-29). It touched on an important issue that I have been wrestling with, on and off, for the better part of the past decade.

I, too, am a scientist working in the defense industry to develop advanced infrared focal plane arrays for space-borne platforms. We acquire enormous amounts of data in our work, and for a long time now, I have felt we should move to a database system instead of the numerous ad hoc solutions scattered throughout the company on various projects. It is a land of Babel at the moment...

Object-oriented database management systems (OODBMSs) have always seemed a more natural fit to the way we scientists think about our data, whereas relational database management systems (RDBMSs) have always seemed more appropriate for business record-keeping chores. The article by Ani Thakar, Alex Szalay, Peter Kunszt, and Jim Gray touched on some important shortcomings of current OODBMS technology.

Sadly, the article's tone gave the impression that the authors continue to believe, as I do, in the fundamental mismatch between RDBMS technology and scientific data repositories. It appeared that OODBMS's difficulties have more to do with vendor incompetence and noncompliance than with anything fundamental in the approach. Perhaps unintentionally, the authors also gave the impression that they chose to sidestep the issue of the mismatch, pushing the obstacle back onto the scientific user. They stated that users didn't find it so bad once they used a few specialized SQL macros. No doubt the authors invested heavy effort in this side of the issue, but it was given short shrift in the article.

I would like to see more details about the effort involved in accommodating scientific data and object-oriented viewpoints within RDBMS technology. Whenever I contemplate such a migration, I get a very bad feeling in the pit of my stomach.

David McClain, Senior Scientist, Raytheon Missile Systems

The authors respond:

We did not intend to sidestep the issue of mismatch; rather, our point was that in the world of multiterabyte (soon multipetabyte) archives, the primary considerations are performance and ease of use. The conceptual mismatch is a secondary issue. RDBMSs today offer far better performance and features than their object-oriented counterparts. Another point we tried to make was that Internet standards for data interchange have advanced to the point where the mismatch can be largely alleviated with proper packaging of the data (using XML schema, for instance).

To summarize:

  • The conceptual mismatch between scientific data and relational tables is of secondary importance to the data-mining performance and features the DBMS offers. Modern RDBMSs deliver both performance and features, whereas OODBMSs have not kept up with the demands of data-intensive science.
  • The mismatch problem is not as bad as we had originally thought. As we described in our article, we were able to translate our object data to relational tables without too much trouble. We also added several stored procedures and functions to encode methods within the SQL database.

Indeed, our main conclusion is that scientists can and should migrate from object to relational databases if they are to achieve their data-mining objectives on terabyte-scale databases.

Aniruddha R Thakar, Assoc. Research Scientist, The Sloan Digital Sky Survey
