Letters

IEEE Software, vol. 22, no. 1, January/February 2005, pp. 8-9
Published by the IEEE Computer Society
To Model or Not to Model
Having read Brett Holt's letter in the July/Aug. issue (supporting Robert Glass's column "On Modeling and Discomfort" in the Mar./Apr. issue), I feel compelled to respond.
In an ideal world, all software designs would be modeled and their implementation driven from those models. This approach's holy grail is the ability to automatically and perfectly generate the implementation (source code) from the model. This represents the next evolutionary jump in software programming—a further abstraction of complexity (we had binary code, then assembly language, then functional languages, then object-oriented languages).
However, we don't live in an ideal world, and this approach isn't always suitable. Modeling the design carries significant overhead, a minimum amount of modeling is required regardless of the development's size, and the skill needed to create and understand models is considerably higher than that needed to write source code. It's always important to select the development approach that best suits the task and the resources available; there's no silver bullet.
Model-driven approaches to development are best suited to more complex projects (systems) and those with significant reliability, security, and maintainability issues. Many developers who criticize model-driven approaches have never had to develop complex, industrial-strength systems or applications. Furthermore, many developers simply don't have the skills or ability to create or understand models effectively and hence are apprehensive of this approach.
Brett Holt's comment—that his software manager's attempt to create sufficiently detailed UML shifted the emphasis from modeling the design to modeling the implementation—is valid. His manager had unrealistic expectations given the process, tools, and resources he had available. UML is simply a language and doesn't provide a process for creating models or translating them into code. It also has serious drawbacks for automatic code generation, as it provides far too many modeling options and no predefined standard for specifying processes.
People have been automatically generating code from models successfully for over a decade, especially in the automotive and telecommunications industries. One of the most successful approaches involves using the Shlaer-Mellor modeling method, augmented with the Action Specification Language (ASL), and having an architecture or translation domain that defines how to translate the models and ASL into code. However, this requires an absolutely rigorous approach to modeling and carries the significant overhead of creating and maintaining an enterprise-wide architecture domain (translator). So, it's really only suitable for complex, long-lived systems developed by highly process-driven organizations. For those interested in investigating this approach further, one of the best advocates is Kennedy Carter, a UK-based consultancy that's adapted the approach to use UML semantics (www.kc.com).
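To make the idea of a translation domain concrete, here is a deliberately simplified sketch in Python, purely for illustration; it is not the Shlaer-Mellor/ASL toolchain, and the model format, the Door class, and the code template are all hypothetical. A declarative state-machine model is fed through a single translator that emits executable code, which is the essence of keeping all model-to-code knowledge in one architecture domain.

# Illustrative sketch only: a toy "translation domain" that turns a
# platform-independent state-machine model into executable code. The model
# format and translator are hypothetical; real Shlaer-Mellor/ASL tools work
# on far richer models with rigorous action semantics.

MODEL = {
    "class": "Door",
    "initial": "Closed",
    "transitions": {
        # (current state, event) -> (next state, action description)
        ("Closed", "open"): ("Open", "unlock and swing open"),
        ("Open", "close"): ("Closed", "swing shut and latch"),
    },
}

TEMPLATE = '''class {cls}:
    """Generated from the {cls} state model; do not edit by hand."""

    TRANSITIONS = {transitions!r}

    def __init__(self):
        self.state = {initial!r}

    def signal(self, event):
        try:
            next_state, action = self.TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"event {{event!r}} not valid in state {{self.state!r}}")
        print(f"{{self.state}} --{{event}}--> {{next_state}}: {{action}}")
        self.state = next_state
'''

def translate(model):
    """The 'architecture domain': the one place that decides how models become code."""
    return TEMPLATE.format(cls=model["class"],
                           initial=model["initial"],
                           transitions=model["transitions"])

if __name__ == "__main__":
    source = translate(MODEL)        # generate the implementation from the model
    namespace = {}
    exec(source, namespace)          # compile and load the generated class
    door = namespace[MODEL["class"]]()
    door.signal("open")              # Closed --open--> Open: unlock and swing open
    door.signal("close")             # Open --close--> Closed: swing shut and latch

Everything that determines what the generated code looks like lives in the translator, so retargeting a new platform means changing one artifact rather than every model; that is also why building and maintaining the translator is the significant overhead described above.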
David Allen
Head of new product development
Sequoia Voting Systems
dave.allen@uk.delarue.com
Real-world Software Development
I have just read Robert Glass's column "Some Heresy Regarding Software Engineering" (July/Aug. 2004). I suggest that the real problem is in determining the software system's requirements. With traditional software engineering, you must determine all the system's requirements up front. Of course, this is often difficult, so most developers rely on their (and their teammates') expertise to figure out the requirements as they progress through the project's design, implementation, and even test phases.
In many respects, I believe this is what the agile and Extreme movements are saying and what's happening in the Web projects that you cite. It's up to the developers to discover the requirements, presumably on the basis of some vague initial brief and a lot of iterations with the client. However, this is simply a fudge to let developers do what they love, which is to program, rather than making them figure out what the client wants.
Some years ago, we tried a process improvement exercise, and like the experience you mentioned in your column, it (mostly) failed, largely because most of our engineers are programmers and they get their kicks from seeing their software bring a piece of hardware to life. Analyzing requirements on whiteboards and in documents is, to them, both tedious and uncreative, for the simple reason that it's not alive in the same way that executing software is.
For some projects, letting the programmers figure out most of the required behavior might be best. However, because they haven't actively set out to discover all the requirements and to ensure that they're consistent, the end result might be flawed. This is evident in Web sites that have broken links, poor usability, dysfunctional forms, or defective mid-tier logic. It's also true for many PC programs.
Therefore, I suggest that any software engineering process's primary aspect must be to capture, understand, and deliver a product's requirements. How the engineers achieve this is immaterial. Civil engineers normally have a single opportunity to build something and therefore must expend significant effort up front on analysis and design. This is their discipline; why should software engineers be any different?
Part of the answer might lie in the risks involved. After all, a defective civil engineering project could be dangerous and costly, whereas a Web site can hardly be so. But here lies another problem with software engineering: the limited process for understanding and mitigating risks. A defective Web site might cost the company it represents a lot in lost sales revenues, which in today's Internet world is risky to a company's bottom line.
What we really need are better techniques for modeling requirements that enable such models to come alive. Of course, the effort required to model the requirements must pay off in terms of eliminating code thrashing and improving both productivity and quality. This is why I believe that the MDA movement has a lot of promise, if only its techniques become accessible and affordable.
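One possible reading of requirements models that "come alive" is to capture each requirement as a small executable check that can be run against the implementation at any time. The sketch below, in Python, is only an illustration; the order_total function and the requirement wording are made up.

# Illustrative sketch only: requirements written as executable checks, so the
# statement of required behaviour can be run, not just read. The order_total
# function and the requirement names are hypothetical.

def order_total(prices, discount_code=None):
    """Implementation under test (deliberately simple)."""
    total = sum(prices)
    if discount_code == "SAVE10":
        total *= 0.9
    return round(total, 2)

# Each entry is one requirement, stated once and checkable forever after.
REQUIREMENTS = {
    "R1: the total is the sum of the item prices":
        lambda: order_total([10.0, 5.0]) == 15.0,
    "R2: the SAVE10 code applies a 10% discount":
        lambda: order_total([100.0], "SAVE10") == 90.0,
    "R3: an empty basket costs nothing":
        lambda: order_total([]) == 0.0,
}

if __name__ == "__main__":
    for name, check in REQUIREMENTS.items():
        print(("PASS" if check() else "FAIL"), name)

Rerunning the requirements whenever the code changes is what keeps the model and the implementation from drifting apart, which is where the payoff in productivity and quality would have to come from.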
Dave Banham
Senior engineer
Areva T&D UK Ltd.
dave.banham@iee.org.uk
Software isn't a goal in its own right; it's a means to an end, a medium. As with any tool, the software's purpose defines the constraints under which it's engineered. So, as Robert Glass says in his July/Aug. Loyal Opposition column, it's wrong to imagine that you can or should develop all software under the same constraints.
Engineering training, from my experience, tries to teach "mainstream" methodology to arm students for future exploration. I don't feel (even from my own university courses some 15 years ago) that anyone really believes there's one way to produce a body of software, or even that the mainstream concept is anything other than a useful fiction analogous to the orbiting-planets model of an atom.
Being a teacher, you have a view of university teaching that I don't. It might well be the case that university teaching fails to communicate the point that taught methodologies are approximations subject to reality. This is, as you highlight, obvious to anyone who gives the field a moment's reflective thought, although experience would have to inform that reflection—experience that your students don't yet have.
The best gift that you can give your students is the clear knowledge that they must start somewhere and that textbooks at least offer well-understood and well-documented information, even if not a realistic picture of what actually happens.
Certain constraints should apply to the exercise in complexity management that producing software systems turns out to be, and those are what you want your students to master before they move on.
Nathan Sowatskey
Technical leader
NMTG CTO Engineering
nsowatsk@cisco.com
Input Checking: A Better Approach
In "Fail Fast" (Design, Sept./Oct. 2004), Jim Shore's main point was very well taken. Assertions are indeed a powerful mechanism, but the subsequent misuse of assertions as an input-checking mechanism was quite disappointing.
A far better approach is Bertrand Meyer's comprehensive, structured Design by Contract discipline, which applies assertions and the business-subcontracting metaphor to the specification, documentation, and optional verification of correct software interfaces. Interested programmers can learn all the ways in which DBC is superior to GIGO (garbage in, garbage out), defensive programming, ad hoc assertions, and proofs, as well as its relation to robustness, testing, efficiency, and other qualities, at www.cs.unc.edu/~smithja/MIMS/DataModel/research/DBC.html.
Training and free software are available to apply DBC to varying degrees in several popular languages, including C, C++, Fortran-90, Java, and Python. Web resources include http://designbycontract.com, http://sourceforge.net/projects/icplus, and www.wayforward.net/pycontract.
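As a rough illustration of the distinction, here is a minimal sketch in Python using plain assert statements rather than any of the libraries listed above; the withdraw function and its contract are hypothetical.

# Minimal sketch, not tied to any particular DBC library: the contract is
# expressed with plain assert statements. The withdraw function is hypothetical.

def withdraw(balance, amount):
    """Debit amount from balance under an explicit contract.

    Precondition: the caller guarantees a sensible request.
    Postcondition: this routine guarantees a consistent result.
    """
    # Preconditions: a failure here is the caller's bug, and it fails fast.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: amount must not exceed balance"

    new_balance = balance - amount

    # Postcondition: a failure here is this routine's bug.
    assert new_balance == balance - amount, "postcondition: balance must drop by amount"
    return new_balance

def withdraw_defensively(balance, amount):
    """Defensive input checking, for contrast: bad requests are quietly
    tolerated, so the error surfaces far from its cause."""
    if amount <= 0 or amount > balance:
        return balance
    return balance - amount

if __name__ == "__main__":
    print(withdraw(100, 30))       # 70
    try:
        withdraw(100, 500)         # violates the precondition and fails fast
    except AssertionError as exc:
        print("contract violation:", exc)

The contract version documents who is responsible for what and stops at the violation, whereas the defensive version absorbs the bad call and leaves the caller none the wiser.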
Todd Plessel, CSDP
Visualization specialist
plessel@computer.org