The authors of "Creating and Reading Realistic Electronic Books" (V. Liesaputra, I.H. Witten, and D. Bainbridge, Feb. 2009, pp. 72-81) miss the point with regard to the appeal of electronic books. The issue is not style and appearance—it is function.
Access to electronic books requires a device—today, typically a computer. The first step is to log on to read the book. This, for the most part, limits access to one location. Would anyone tote around a laptop just to access an electronic book while waiting at the doctor's office or the airport?
Then there is the limited visual space. Most computer monitors display at most one-half of a conventional 8.5 × 11-inch page at a time with a reasonable font size.
Other limitations include access speed and the fact that only one screen is visible at a time. With a physical book or magazine, I can quickly skim through the pages to find what I am looking for. I can also quickly go from the table of contents or index to the page I want. In most cases, electronic retrieval of the page I am interested in is too slow.
For a start, electronic books need topic-oriented search. Current searches are word-oriented, not topic-oriented, which limits their usefulness. Often I have to trudge through dozens of occurrences of a word in a book to find the topic I am actually looking for. Page-turning styles and peeling geometry are just window dressing.
Electronic books need more functionality.
Gerald Marsh
The authors respond:
We concur with many of Gerald Marsh's points, and we certainly agree that electronic books need more functionality.
Searching is one obvious extension. Since writing our article, we have designed a search function and conducted user evaluations that show that for some technically oriented tasks, realistic books compare favorably with the real thing. We have also designed and implemented a capability for both typed and free-form annotation. These features can be seen in our examples at www.nzdl.org/books.
Admittedly our current search function is word-oriented, not topic-oriented, as Gerald Marsh suggests. But topic-oriented search, powered by the 2 million (and growing) topics within Wikipedia, is around the corner: We have a proof-of-concept prototype and expect to be able to demonstrate it within our books soon.
Give electronic books a little time: Real ones have been evolving for millennia.
Because Computer is a professional journal, its reviewers and referees are responsible for ensuring that the manuscripts they accept for publication present original work. Regarding "Creating and Reading Realistic Electronic Books," it is important to point out that the Library of Congress and the National Library of Medicine developed page-turning software as a joint effort about 10 years ago. Those involved in the development, both from the LoC and the NLM, have since left these institutions, resulting in a dearth of institutional knowledge about the background of the software's development.
A sample is available through the LoC's website: http://loc.gov. A direct link is www.loc.gov/flash/pagebypage/buccaneers/bookBorder.html. When viewing the book, users will note that the application also provides an audio translation of the text on a page-by-page basis, a feature that was included to accommodate people served by the National Library Service for the Blind and Physically Handicapped (www.loc.gov/nls), which is a public service of the LoC.
I understand that, when funding becomes available, the LoC has plans to digitize much of its rare book collection. One problem, aside from the lack of funding, is that the digitization process is lengthy, as the rare books must be scanned with low light, requiring up to six hours per page.
Paul D. Lane
The authors respond:
As we mentioned in our article, the page-turning method in Realistic Books has been used in many earlier demonstrations, although we did not single out the work at the Library of Congress.
What is novel about our work is that (a) it provides more visual cues and navigation facilities, for example, by showing the page edges on both sides and allowing readers to randomly access parts of the book by clicking a page edge; (b) we compare reader performance with other electronic presentations, and with physical books, through an objective user evaluation; and (c) we give end users a way to make books from their PDF or HTML files using the open source software we provide.
We also wished to rectify the dearth of knowledge about such techniques, which, as Paul Lane points out, sometimes extends even to the institutions that helped pioneer them.
The February article about the consolidation of search engines was very informative (A. Mowshowitz and N. Kumar, The Profession, "And Then There Were Three," Feb. 2009, pp. 108, 106-107). The authors were wrong, however, about there being "no direct evidence of bias." The citizens of the People's Republic of China cannot view any webpages that disagree with their government's views on Taiwan and Tibet. This makes me suspicious of proposals to remove bias through government regulation.
Government regulation would have the effect of moving attempts to bias search results from the financial to the political. Instead of paying a search engine for a better search ranking, a company or special interest group would lobby the government agency entrusted with regulating search engines. If creationists can get Intelligent Design included in a school biology curriculum, they should be able to get ID webpages displayed prominently when searching on "theory of evolution."
Technology may already be providing a solution, however. With the advent of cloud computing and open source software, it may be possible for most people to have their own personal search engine. Biases would still be present, but the individual would be inserting them. Including your personal biases might even be the feature that leads to the spread of personal search engines.
Victor Skowronski
The authors respond:
The example of the blocking of search results by a state actor clearly suggests that search engine bias poses significant risks to individuals and societies. Our argument applies to liberal democracies where nuanced intervention is needed to protect the public interest. The current state of the financial industry attests to the consequences of failing to do so.
The article makes the case that the oligopoly in the search engine industry threatens the public interest in subtle ways and proposes two alternative approaches to deal with the problem: stimulate competition in the industry and impose fairness standards on search engine operators. Neither one would involve "nationalizing" the search engine business. Stimulating competition means providing incentives to start-up companies; fairness standards would require companies in the industry to demonstrate compliance.
The imposition of fairness standards could be accomplished with or without the establishment of a specialized government agency. Legislation mandating compliance with standards could leave enforcement to search engine users. The "self-help" approach is not uncommon in US law.
It is rather doubtful that technology will come to the rescue, as Victor Skowronski argues. Even if personal search engines were widely adopted, Web users would still depend on general search engines for access to the broad range of information on the Web. So long as the broad-coverage search engine business remains an effective monopoly/oligopoly, the public interest will require action to insure against biased search results.
In "Virtual Walls" (The Known World, Mar. 2009, pp. 8-10), David Alan Grier refers to my moving "a government machine." That machine was SEAC, the first fully operational stored-program computer in the US, which we designed and built at the National Bureau of Standards and which began operation in April 1950.
By 1954, SEAC had been in highly successful 24/7 operation for four years, used by government, industry, and academic scientists and mathematicians and maintained round the clock by the people who had designed and built it. Using both marginal testing and debugging programs, we confidently believed that we could keep the computer running at all times with no more than a single intermittent fault that could be detected by the various testing methods we used.
But when I had to move the computer to a new location on the NBS campus, I feared that preventing faults in the 1,200 vacuum tubes, 12,000 germanium diodes, and thousands of solder joints was not possible. Thus, all the maintenance techniques used so successfully over the past four years would be ineffective, and SEAC would not survive the move.
I was very mistaken, as we discovered when the move was completed. However, in reconnecting the various components, I found a wiring error that had been present from the beginning, an error for which one could write a program that would always fail. Obviously, it had never been detected before. Of course, I fixed the error, and SEAC continued successful operation for another 10 years.
My experience with SEAC—an apparently perfectly functioning computer that in fact contained a logic error—led me to create a postulate in computer theory. The Kirsch postulate was likely true in 1950 and is many millions of times more likely to be true today: All computers are always, in some sense, "broken."
A video discussion of SEAC history by four of the original designers can be seen at http://video.google.com/videosearch?q=seac+history.
Russell A. Kirsch