Rethinking Formal Methods
I enjoyed reading "Really Rethinking 'Formal Methods'" by David Parnas in Computer's January issue (pp. 28-34).
Many people working in computer science who have an engineering background—for example, I am an electrical engineer who focuses on computer engineering—have long been thinking about, discussing, and writing papers on the same issues Parnas raises. As Parnas points out, people working in the real world do not use formal methods because they do not provide the solutions we seek.
It is important to point out that not all of the mathematics used in computer science belongs to formal methods. An example that takes exactly the "engineering-like approach" Parnas refers to is the theory of space-time mapping of nested loop algorithms onto hardware. In this theory, the results, that is, the mappings, are calculated much as engineers make calculations: by setting up systems of equations and solving them. But this approach was overtaken by "formalists," and now only small groups are working in the area of space-time mapping.
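To make the idea concrete, here is a toy illustration of space-time mapping (my own example, not from the letter): each iteration (i, j) of a nested loop is assigned a time step and a processor by a chosen linear mapping, here the schedule t = i + j and the allocation p = i, as in classic systolic-array designs.

```python
# Toy space-time mapping of a 2D loop nest onto a linear processor
# array. The linear schedule t = i + j and allocation p = i are one
# valid solution of the mapping equations; real derivations solve for
# these coefficients subject to dependence constraints.

N = 3
mapping = {}
for i in range(N):
    for j in range(N):
        t, p = i + j, i          # linear space-time mapping
        mapping.setdefault(t, []).append((p, (i, j)))

for t in sorted(mapping):
    print(f"time {t}: " + ", ".join(
        f"proc {p} runs ({i},{j})" for p, (i, j) in mapping[t]))
```

A mapping is valid only if no processor is assigned two iterations at the same time step; for this schedule, fixing t and p determines j = t - i uniquely, so the condition holds.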
The approach in which computer scientists use mathematics to calculate results and solutions, as engineers in other fields do, deserves more attention and should be developed further.
In "Really Rethinking 'Formal Methods'," David Parnas makes several provocative points about the impediments to industrial adoption of formal methods. While many of his criticisms of formal methods research have some validity, they do not support his claim that industry has failed to adopt formal methods.
Parnas's claim about the lack of industrial adoption seems to stem from two parochial viewpoints. First, Parnas focuses on older approaches; he notes that industry "chooses not to use methods such as Z or VDM"—methods developed more than 30 years ago. It is correct to note that these techniques do not satisfy industrial needs, but they blazed the trail for more advanced approaches, such as model checkers and satisfiability modulo theories solvers, that companies from Rockwell Collins (see Comm. ACM, Feb. 2010) to Microsoft are developing and using for software verification. As Bill Gates noted at the 2002 OOPSLA conference, "For many very interesting properties this idea of proof [of programs] has come a long ways in helping us quite substantially … ." ( www.microsoft.com/whdc/devtools/tools/sdv-case.mspx).
Furthermore, Parnas neglects the technology transition from formal methods to modern programming languages. Corporations ranging from Eaton, a major industrial manufacturer, to Credit Suisse, a financial services company, have adopted strongly and statically typed functional languages—declarative languages with similarities to the Z and VDM notations—eliminating classes of software bugs. In other words, yesterday's formal methods research developments inspired today's software language features.
While Parnas's claim that industry has failed to adopt formal methods may not be accurate, some of his criticisms of formal methods research have merit. For example, his point that theoretical computer science sometimes forgoes classical mathematical approaches is well taken. As one example, he notes that new logics have been developed to reason about real-time software, while in control theory and circuit theory, "time is represented by an additional variable that is not treated in any special way." While true, recent approaches have anticipated Parnas's criticism. See, for example, "Real-Time Model Checking Is Really Simple" by Leslie Lamport ( http://research.microsoft.com/en-us/um/people/lamport/pubs/charme2005.pdf) or "Easy Parameterized Verification of Biphase and 8N1 Protocols" by Geoffrey M. Brown and Lee Pike ( www.cs.indiana.edu/~lepike/pubs/bmp.pdf). Contemporary research is also addressing his other criticisms.
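A minimal sketch, entirely my own illustration, of what treating time as an ordinary variable looks like in practice: a timing property over a toy protocol trace is checked with plain arithmetic on a numeric timestamp, with no special real-time logic involved.

```python
# Time is just a number: the property "every ack arrives within
# `deadline` of its send" is ordinary arithmetic over timestamps.

def check_deadline(send_time, ack_time, deadline):
    """Return True when the ack meets the deadline."""
    return ack_time - send_time <= deadline

# An explicit-state trace of a toy protocol: (event, timestamp) pairs.
trace = [("send", 0.0), ("ack", 1.5), ("send", 2.0), ("ack", 4.1)]

def verify_trace(trace, deadline):
    """Check every send/ack pair in the trace against the deadline."""
    pending = None
    for event, t in trace:
        if event == "send":
            pending = t
        elif event == "ack" and pending is not None:
            if not check_deadline(pending, t, deadline):
                return False
            pending = None
    return True

print(verify_trace(trace, 2.5))  # True: both acks arrive in time
```

This is of course far simpler than model checking a real system, but it shows the flavor of the argument: the timestamp is state like any other.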
While a complete rethinking is not necessary, Parnas's point remains: "Good methods, properly explained, sell themselves." And industry is buying.
The author responds:
I am not sure what point Dr. Pike is making. None of his observations refutes (or even rebuts) the main observation in my article, namely that the routine use of formal software development methods remains rare after more than 40 years of effort.
• It is true that I failed to mention "more advanced approaches, such as model checkers and satisfiability modulo theories solvers." There are hundreds of interesting techniques that I did not mention. They are valuable research results, but they do not constitute a software development method that is ready for widespread use. It is quite imaginative to assert that methods such as VDM and Z "blazed the trail" for model checkers and other decision procedures. The roots of that work are much older.
• It is true that there have been industrial applications of formal techniques such as the very recently published one that Dr. Pike mentions. As I wrote, the fact that these are research efforts, and the experience reports considered publishable, supports my claim that use of such methods is uncommon. Further, many methods check design models, not deployed code. Others are partial checks on existing code, rather than ways to develop provably correct code.
• It is true that many companies have funded research on formal methods; they began doing that at least 40 years ago. There has been limited success in some tractable areas such as protocol analysis, but application of the methods is far from "standard practice."
• It is true that some companies have used a nonstandard programming language on some projects and that some of these languages are clearly inspired by mathematics. The resemblance between these languages and the best known "formal methods" indicates that they have common roots—not that one inspired the other.
• It is true that many of the alternatives that I suggested were previously suggested by others. I raised those issues, not because I think they are new, but because I think they deserve more consideration than they get.
I hope that someday we will have formal methods that are sufficiently effective and practical that software developers routinely use them in the way that engineers in traditional disciplines use differential and integral calculus. We have not made much progress toward that goal, and researchers must ask why.
The authors of "Online Security Threats and Computer User Intentions" (T. Stafford and R. Poston, Jan. 2010, pp. 58-64) raised a good point in stating that "Internet users thus seem to understand that spyware is a very real danger to their personal computers and information, yet most do not take protective actions against it." Spyware has become so common that it has moved ahead of viruses as the number one danger facing computer users today. Although trusted antispyware programs may help protect our computers, online security is far from a sure thing.
It is equally important to educate users to take precautions when they download files from the Internet. Music file-sharing programs are a major potential threat. Certain downloads such as free games, screen savers, and even smiley face packages can cause significant problems too. Users should choose their passwords carefully and change them periodically to prevent unauthorized access to their information.
Maintaining online security only at the user level is insufficient. Technology changes too fast to allow any final solution; each new advance supersedes the last. For IT managers and network administrators, in addition to deleting the passwords of departing employees and preventing malicious interception of data transmissions, deploying the latest encryption schemes is critical because obsolete data encoding invites criminals to steal information that spyware can easily read.
Unlike desktop antispyware, gateway antispyware is a proactive measure that serves as a strategic point for scanning traffic and protecting an entire network against Internet-borne malicious software. Preventing spyware from entering the network and being installed on individual computers will substantially reduce the spyware threat.
Gateway scanning for spyware, however, requires high-performance technology that can handle real-time Web traffic for large numbers of computers; otherwise the gateway itself becomes a network bottleneck. An alternative better suited to the nature of real-time Web traffic is a streaming-based approach in which the receiving, scanning, and outputting processes operate concurrently, minimizing processing time and maintaining good network performance.
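A minimal sketch of that streaming idea, with a hypothetical signature check standing in for real scan logic: the receiving, scanning, and outputting stages run concurrently as pipeline stages joined by queues, so one chunk can be forwarded while the next is still arriving.

```python
# Three-stage streaming pipeline: receive -> scan -> output, each
# stage in its own thread, connected by thread-safe queues.
import queue
import threading

SENTINEL = None  # marks end of stream

def receiver(chunks, to_scan):
    for chunk in chunks:            # stand-in for reading network traffic
        to_scan.put(chunk)
    to_scan.put(SENTINEL)

def scanner(to_scan, to_output, signatures):
    while (chunk := to_scan.get()) is not SENTINEL:
        # Hypothetical signature match; a real gateway scanner is far
        # more involved (stream reassembly, decompression, heuristics).
        if not any(sig in chunk for sig in signatures):
            to_output.put(chunk)
    to_output.put(SENTINEL)

def writer(to_output, delivered):
    while (chunk := to_output.get()) is not SENTINEL:
        delivered.append(chunk)

traffic = [b"GET /index.html", b"EVIL_PAYLOAD", b"POST /form"]
q1, q2, out = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=receiver, args=(traffic, q1)),
    threading.Thread(target=scanner, args=(q1, q2, [b"EVIL"])),
    threading.Thread(target=writer, args=(q2, out)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)  # the EVIL_PAYLOAD chunk is dropped; the rest pass through
```

The design point is that no stage waits for the whole stream: the queues let each chunk flow through as soon as its predecessor stage finishes with it.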
The authors respond:
The point is quite well taken that user-based security measures against spyware threats are increasingly ineffective, whether because of the increasing pace of threat proliferation, as the writer suggests, or because of waning user motivation to act proactively, as my own work begins to imply. However, the suggestion that user-based protection applications are an insufficient answer to the entire problem malware poses in modern networked computing is potentially misleading.
It is true that user-side solutions are fraught with challenges. My research has made me realize that many users are simply not responsive to security appeals. But it has also made me aware that every step must be taken to institute security measures in view of the increasingly ominous threat that malware represents. Hence, the synthesis of my view with that of the writer would be: The ideal approach is user-side protections merged with a robust gateway protection model.
The sad fact is that many computer users don't feel particularly threatened by malware. Thus, as suggested, the proactive approach of using gateway protection measures to scan and filter traffic at the server level is not only sensible, it is likely essential. This should not be construed to mean that it is no longer important to promote protection among users. And, as the writer aptly notes, the sort of gateway applications that will serve to actively filter malware attacks from networks are resource-intensive, both in terms of operation and development. However, in view of the growing complacency that users seem to be exhibiting, I think that, in the final analysis, we will see more of this sort of macro-level proactive security. Either that, or we'll soon be adopting a version of Scott McNealy's famous privacy statement, "Computer security is dead. Get over it."
"A Discrete Stock Price Prediction Engine Based on Financial News" (R.P. Schumaker and H. Chen, Jan. 2010, pp. 51-56) omitted some important aspects of this topic.
An old IEEE Transactions article showed that the stock market is completely random. In essence, success is based purely on luck.
Did they make a mistake?
I suspect that even empirical "success" would just be a random test of something akin to a truncated simulation of the St. Petersburg paradox. Isn't any short to medium run of good results just successful gambling?
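A small Monte Carlo illustration (my own, not from the article) of this gambling point: among many traders whose trades are pure coin flips, a noticeable fraction will post impressive short-run records by chance alone.

```python
# Simulate coin-flip "traders": each of 20 trades wins with
# probability 1/2. Count how many traders look skilled anyway.
import random

random.seed(1)
N_TRADERS, N_TRADES = 10_000, 20

records = [sum(random.random() < 0.5 for _ in range(N_TRADES))
           for _ in range(N_TRADERS)]

lucky = sum(wins >= 15 for wins in records)  # a 75%+ win rate
print(f"{lucky} of {N_TRADERS} coin-flip traders won 15+ of 20 trades")
```

Roughly 2% of pure-luck traders clear the 75% bar over 20 trades, which is exactly why a short run of good results proves nothing by itself.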
So what's the true benefit of such a system over the long run?
Granted, inside information can make stock trading a sure thing. But when using only public knowledge, there is no guaranteed way to gain an edge.
What is needed is a system that can anticipate the news and act in advance.
Anyone who is still unconvinced should read the excellent books Where Are the Customers' Yachts? or The Black Swan (Random House, 2007).
If there were any system that worked, everyone would use it, and then it could no longer work, given that the market is a zero-sum game except for the commissions.
If the method described actually did work at all, it would only work for the early traders, so we are back to the inside information issue.
I suggest that the authors investigate the impact of priority execution of trades versus the system used to see if that might not be a larger factor than news or rumors.
Another potential problem with buying and selling after 20 minutes is the accuracy of the information. Are they going to trade on rumors from the Internet? Will they take time to vet the news stories before trading? Wouldn't the vig that the brokers charge make this system too expensive for those who can't trade for themselves without a fee?
I have to believe that the professionals already are factoring in legitimate news even if not done the way this article describes.
Finally, the authors need to address the obvious question of why correlation, a.k.a. hidden patterns, equates to causation.
Likewise, wavelets and other sophisticated math are impressive, but can anyone prove that they are truly meaningful here?
Serial Computing Is Not Dead
The cover feature articles in Computer's Dec. 2009 issue make interesting reading and indicate that the computing field is seriously addressing the parallel programming issue. However, I assume the guest editors didn't really mean that serial computing is going to be irrelevant when they remarked, "… serial computing is dead" (W. Feng and P. Balaji, "Tools and Environments for Multicore and Many-Core Architectures," pp. 26-27). My experience indicates it ain't going to die. Ever. If the serial computing in each thread/core/process is bad, no amount of parallel processing is going to save the day.
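The last point is Amdahl's law in a nutshell, and a quick numeric sketch (my addition) shows how sharply the serial portion caps the payoff from more cores.

```python
# Amdahl's law: ideal speedup when a fixed fraction of the work is
# serial and the remainder scales perfectly across n cores.

def amdahl_speedup(serial_fraction, n_cores):
    """Ideal speedup for a workload with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for s in (0.05, 0.50):
    print(f"serial fraction {s:.0%}: "
          f"{amdahl_speedup(s, 64):.1f}x on 64 cores, "
          f"{amdahl_speedup(s, 1024):.1f}x on 1024 cores")
```

Even a 5 percent serial fraction caps speedup at 20x no matter how many cores are added, and a 50 percent serial fraction caps it at 2x, which is why bad serial code cannot be rescued by parallelism.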
As the editors point out, the computing field was spoiled by incessant "cranking up" of CPU clock speeds. As a result, we haven't invested in improving the "problem-solving skills" of "assembly line programmers." While "automated parallelization" could help with multicore and many-core architectures, we need to invest more in training the programmers to "think parallel."
Raghavendra Rao Loka
The authors respond:
Indeed, serial computing as a paradigm is not dead. However, the context of our statement that "serial computing is dead" was with respect to improving performance. Many papers have shown that we have effectively hit a wall with respect to single-thread performance, largely due to the stalling of increases in the clock frequency.
Reading "Personal Skills for Computing Professionals" (The Profession, Oct. 2009, pp. 112, 110-111) made me wonder whether some additional skills should be added to the list. I would name them as "openness to different opinions" or "capacity for accepting unusual suggestions."
The column author noted that, despite efforts to make changes with regard to gender-related issues, women remain a minority in the computer science and electrical engineering communities.
In a recent e-mail discussion related to improving IEEE conferences, I offered some suggestions for enriching the social life of attendees. My idea was to attract prospective young female conference participants, that is, undergraduate students, by offering opportunities to participate in shopping tours, beauty treatments, visits to local cosmetics manufacturers, or similar events. I had in mind a pleasant experience some years ago when the organizers of an IEEE-related gathering of undergraduate and graduate students in southeastern Europe offered similar activities, and the participants were thrilled to participate in them.
Of course, such activities must be coordinated with the scientific program and the available free time. But it is important to offer more alternatives for the younger generation of researchers-to-be, especially for participants who come from Third World countries. For a young person, such an expensive trip needs to offer more than presenting a paper and receiving a bag containing the conference proceedings. Attendees should also have opportunities to explore the cultural or historical aspects of a conference venue and its surroundings.
However, it appears that a couple of established female scientists on the forum found my proposals insulting. I apologized for the misinterpretation of my well-intentioned suggestions and even tried to introduce a relaxing option for male researchers by proposing a "fishing afternoon" after the parallel sessions, including digging for worms in the mud, but it did not help.
Although I am aware that cultural contexts differ across the places we live in the world, what we share is the need to attract more "fresh blood" to our old-fashioned engineering and technology world.