Issue No. 5, September/October 2005 (vol. 7), p. 2
Published by the IEEE Computer Society
ABSTRACT
I spent a pleasant afternoon at the Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science (CLEO/QELS) in Baltimore this June. Having only a relatively short time to spend, I decided to try a quick experiment: cruise the aisles of the poster session, scan the presentations, and get a sense of how deeply computing practices have permeated the conduct of experimental optics research and development.
I spent a pleasant afternoon at the Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science (CLEO/QELS) in Baltimore this June. Having only a relatively short time to spend, I decided to try a quick experiment: cruise the aisles of the poster session, scan the presentations, and get a sense of how deeply computing practices have permeated experimental optics R&D.
I found scant mention of computation in these reports; I can't recall whether my first reaction to this revelation was surprise or disappointment. Instead, I tried to determine why only two of these scores of reports indicated that computation had played a sufficiently important role in their discoveries that it merited discussion. My frame of reference for thinking about this puzzle lies partly in my own history with experimental optics. From 1982 to 1987, I worked on an effort to engineer instruments for measuring optical propagation and studying the atmosphere's effects on propagating light. Connecting those instruments' performance characteristics to the interpretation of their output data for adaptive optics applications requires computational modeling.
The other part of my frame of reference is the progress made in computing hardware and software in the generation since my optical research work. Back then, we relied on simulation data produced by a specialist who understood high-performance computing and its associated simulation codes. We would then patch those data as input into programs we had patiently handcrafted in Pascal to emulate instrument performance. Finally, we would hand selected output streams to someone skilled with graphing tools, who would display the results for comparison against our experimental data. Today, an individual experimentalist can use a desktop machine with code packages and productivity tools to perform the same elaborate, integrated computation end to end.
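To make that claim concrete, here is a minimal sketch of such an end-to-end desktop workflow, written in Python with NumPy and Matplotlib. Everything in it is illustrative rather than drawn from the column: the log-normal scintillation model is a standard first-order stand-in for weak atmospheric fluctuations, and the detector responsivity, read noise, and saturation values are invented for the example.

```python
# Illustrative sketch only: simulate an optical signal, emulate a simple
# instrument response, and plot the comparison, all in one desktop script.
# The propagation model and detector parameters are assumptions for the
# example, not a real adaptive-optics code.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

# 1. "Simulation" stage: log-normal irradiance fluctuations, a common
#    first-order model for weak atmospheric scintillation.
log_amplitude_var = 0.05  # assumed Rytov-like log-amplitude variance
true_irradiance = rng.lognormal(
    mean=0.0, sigma=np.sqrt(4 * log_amplitude_var), size=10_000)

# 2. "Instrument" stage: emulate a detector with finite responsivity,
#    additive read noise, and saturation.
responsivity = 0.8        # assumed detector gain
read_noise_sigma = 0.02   # assumed read-noise standard deviation
saturation_level = 3.0    # assumed clipping level
measured = np.clip(
    responsivity * true_irradiance
    + rng.normal(0.0, read_noise_sigma, true_irradiance.size),
    0.0, saturation_level)

# 3. "Analysis" stage: compare input and measured distributions directly.
fig, ax = plt.subplots()
ax.hist(true_irradiance, bins=100, density=True, alpha=0.5,
        label="simulated irradiance")
ax.hist(measured, bins=100, density=True, alpha=0.5,
        label="emulated instrument output")
ax.set_xlabel("normalized irradiance")
ax.set_ylabel("probability density")
ax.legend()
plt.show()
```

Each stage that once required a separate specialist, the simulator, the instrument modeler, and the plotter, collapses here into a few lines of a single script.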
This confluence of several sophisticated tasks into one person's capabilities, even in the absence of formal programming skills, greatly expands the ability of ordinary scientists and engineers to add computational science to their quiver of competencies. In principle, this should result in an obvious presence of computational methods in experimental work and, we would expect, greater visibility of such methods in descriptions of experimental results. I can readily see how I would do things differently today than we did 20 years ago.
Why, then, the lacuna that I observed in my experiment at CLEO? One obvious hypothesis is that computational components of research work are simply tacit. They don't deserve mention in research reports any more than data averaging or validation do. An alternative hypothesis is that practice hasn't caught up with capability. If you have any thoughts, drop me a line. Right now, I have few clues, but the lack of evidence suggests plenty of "headroom" for the integration of computational methods into experimental practice.