Pages: pp. 3-4
This past spring, I wrote a first word column commenting on a decision by our editorial board to begin experimenting with CiSE activities on the web. I appointed a small ad hoc committee to handle this task, and the group is examining what we might feasibly try. Even before I hear what they have to say, though, I want to share some thoughts about my own recent run-ins with information technology. It's a cautionary tale that starts with a recent visit to my CiSE-Editor@aip.org email account.
As with many prior visits, I had delayed this one because I didn't want to face the spam and well-intended but inappropriate technology announcements that this account accumulates. Expectations of such "clutter" convinced me to set up this mailbox in the first place, separate from my professional and personal accounts. When I first started my term as EIC, I got so few legitimate email messages that the cost of not visiting the account was negligible. My latest visit proved this is no longer the case: in addition to having to write apologies for "justice delayed," I've missed some valuable opportunities to do work in the pastures of the scientific computing community.
So, in part, this is a public apology and an occasion to make an early New Year's resolution to mend my ways. It's also a justification for some deeper thinking about my own credulity about the wonders of the Web for networking. I'm a big believer in using the Web to support collaborative work for both research and professional development. Early on, even with somewhat clunky software and a lack of experience in how to organize work activities to fully exploit the possibilities, it was fairly clear to me that collaboratory technology had a future. I find that small groups of professionals with a good work ethic and high motivation to solve a problem together benefit the most from such collaboratories. My subsequent experiences with educational collaborations taught me another important lesson: this works only if users' mindsets are prepared for it to work. Important parts of a well-prepared mindset are confidence and discipline. Work habits must change to exploit the technology and to suppress the temptation to blame it rather than learning to use it effectively. (Naturally, this assumes that the technology is designed sufficiently well to be capable of being effective.)
My experiences with restricted, professional email lists for which participants are screened and discussion moderated further support this conclusion. These lists use a well-understood medium—email—and transactions are disciplined, so the cost (time) is low and the benefit (usefulness) is high. I'm not sure if or how these qualities and my conclusions will apply to CiSE's putative web-based services.
My mixed experiences with information technology also prompted some musings on the connotations of the word "web." The most obvious reference is to connectedness, and who could think ill of that? But another reference is to entanglement—and there's the rub. My aversion to reviewing my editorial email came down to a cost/benefit judgment: time wasted versus timely response. If we build web services, people might not come if those services aren't efficient, even if they're useful.
Of course I welcome you, our readers, to offer suggestions about what might be useful for us to implement online. The committee and I are slated to discuss their findings during Supercomputing 2007. Two members have promised to write a pair of At Issue articles about all of this to appear in CiSE. And keep those cards and letters coming—I promise to check for them more faithfully.
Judy Cushing joins our editorial board as an expert in the areas of scientific databases and information systems, especially in ecology and natural resource management. She currently serves as a member of the computer science faculty at Evergreen State College, where she teaches software engineering and works on projects ranging from medical, hospital, and epidemiological systems to computational ab initio chemistry, molecular biology, and ecology. Cushing has a PhD in computer science and engineering from the Oregon Graduate Institute. Contact her at email@example.com.
I'm afraid I didn't find the article "Why Fortran?" (vol. 9, no. 4, 2007, pp. 68–71) very useful, but I would appreciate a more detailed comparison by the authors in the future. I'm one of many people who used Fortran years ago but switched to C on a PC for use in computation and instrumentation. I'm interested in the developments in Fortran95, but I'm not sure if this article was aimed at me or if it was intended only for users of massively parallel computers. I have a few questions:
Additionally, the method the authors described to give Fortran95 polymorphism seemed like a kludge to me: we all do what we have to do, but I don't want to make life easy for the compiler, I want the compiler to make life easy for me. I also found that the criticisms of C++ based on decade-old quotes made the authors' arguments less than compelling. In this magazine and others, we've seen tremendous gains in C++ efficiency, to the point where it often matches Fortran code. That doesn't mean we should all be using C++; it just means we need good data about different languages before making decisions.
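The "kludge" complaint is easier to appreciate with a concrete picture of what manually emulated polymorphism looks like. The following C sketch shows the general pattern only (it is not the specific Fortran95 technique the article described): the programmer builds and wires the dispatch table that an object-oriented compiler would generate automatically, which is exactly the "making life easy for the compiler" burden the letter objects to.

```c
/* Emulated polymorphism in a language without built-in dispatch:
   the programmer maintains the function-pointer slot by hand.
   Illustrative of the general pattern, not the article's technique. */

typedef struct Shape {
    double (*area)(const struct Shape *self);  /* hand-built dispatch slot */
    double w, h;
} Shape;

static double rect_area(const Shape *s)     { return s->w * s->h; }
static double triangle_area(const Shape *s) { return 0.5 * s->w * s->h; }

/* Generic code: works on any Shape because calls go through the
   stored pointer -- but every wiring mistake is the programmer's. */
static double total_area(const Shape *shapes, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += shapes[i].area(&shapes[i]);
    return sum;
}
```

In a language with native polymorphism, the dispatch slot and its initialization disappear from user code; here, forgetting to set one pointer is a runtime crash the compiler never sees.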
I hope this was a first article by the authors and that they address some of these questions in future articles.
My coauthors and I wrote the article "Why Fortran?" in response to a request by EIC Norman Chonacky to discuss Fortran as a "language for high-performance computing." Because many computer scientists are baffled as to why Fortran is still in common use in scientific computing, we decided to take advantage of this invitation to explain why this is so.
I do most of my code development on dual-core and quad-core Macintosh PCs. I have four different f95 compilers and two MPI libraries installed on a small cluster under my desk. Two compilers are freely available (g95 and gfortran)—I use g95—and I've also purchased two commercial compilers. Compilers made by chip manufacturers (such as IBM and Intel) usually produce faster code, but every compiler has its strengths and weaknesses; thus, I run my code suite through all of them periodically to make sure no problems appear. Bugs in Fortran95 compilers are pretty rare these days.
I use Fortran for the numerically intensive parts of the calculations. I typically use C when I need to invoke the operating system—for example, I use C to manage threads and sockets, and I write Fortran77 wrappers for such codes. The scientific code (particle simulations of plasmas) typically runs for a long time in batch mode and doesn't itself have a user interface. UCLA's Academic Technology Services is developing a Web user interface, based loosely on Rappture (https://developer.nanohub.org/projects/rappture), to create application inputs for such batch jobs. Commercial packages such as IDL (www.ittvis.com/idl) are typically used to postprocess the results.
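The division of labor described above (Fortran for the numerics, C for operating-system services, joined by Fortran77-callable wrappers) rests on a calling convention followed by many Unix f77 compilers, though name mangling does vary: the C routine takes a lowercase name with a trailing underscore, and every argument is passed by reference. A minimal sketch, assuming that convention; the routine name and its task are illustrative, not from the letter:

```c
/* A C routine exposed with a common Fortran77 convention:
   lowercase name plus trailing underscore, all arguments passed
   by reference. A Fortran77 caller would simply write
       CALL VECSUM(A, N, RESULT)
   The routine and its task are illustrative only; in practice the
   wrapped body would be an OS-level call such as a socket write. */
void vecsum_(const double *a, const int *n, double *result)
{
    double s = 0.0;
    for (int i = 0; i < *n; ++i)   /* n arrives by reference from Fortran */
        s += a[i];
    *result = s;
}
```

Because name-mangling rules differ across compilers, such wrappers are usually kept thin and collected in one place, so a port to a new compiler touches only one file.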
As languages get more sophisticated, programmers lose more control over their programs, which is both a blessing and a curse. A program's logic can be very hard to trace when problems occur because choices are hidden and implicit, and performance might be difficult to improve for similar reasons. It's very important for scientists that their programs be correct—their careers depend on it—and the tradeoff between maintaining control and ease of use is a constant struggle. However, I don't think there's such a thing as a perfect or ideal language for general use: different languages are better at different things. I would like to see computer scientists work on the seamless integration of components written by different people in different languages. The Web services revolution has interlanguage interoperability as one of its aims—but via a Web server, which introduces an extra layer of complexity.