Agile Careers


Jim ("Cope") Coplien is an old C++ shark who now integrates the technological and human sides of the software business as an author, coach, trainer, and executive consultant. He is one of the founders of the software pattern discipline, and his organizational patterns work is one of the foundations of both Scrum and XP. He currently works for Gertrud & Cope, is based in Denmark, and is a partner in the Scrum Foundation. He has authored or co-authored many books, including the recently released Wiley title, Lean Architecture for Agile Software Development. When he grows up, he wants to be an anthropologist. 


Certification aspires to be a societal recognition of some level of professional competence. It’s a measure neither of performance nor ability: a shortcut that claims to predict long-term professional competence. While most certification requires training, it is usually granted on the basis of a few minutes of testing that purport to predict a career of behavior.

While we hold the stereotype that certification indicates an elite level of performance, conveyed by those holding the highest standards in a given field, it’s difficult to find anything that indicates why this should be so. That is, there is little culture of meta-certification: of certifying those who establish certification norms. Its goal is always to separate the included from the excluded, but there is often little evidence that it separates the capable from the desirous.

It isn’t the individuals involved who are the issue, but rather the institution of certification itself. Certification is a trust substitute. I can trust that a proxy certifying agency has established trust on my behalf rather than requiring that I take that bothersome time to do so.

Too often, these norms are set by people who have more of an interest than any imbued right or moral imperative to establish them. They need only to avoid the extremes that would attract controversy, and rarely invoke any empirical grounding. That this is largely so should raise our suspicions. Standardization efforts rarely tap into broad practice or the representation of broad mores, because the people who develop standards almost always take the mantle on themselves instead of being thrust into the position by a supportive constituency. That in itself doesn’t imply that standards-makers are inherently evil or opportunistic. I have known many of these people and they are doing their best to do a good thing.

At best, certification sets the bar at a level sufficiently above blind stumbling to gain the label of competence. But success in today’s engineering and software worlds depends on being able to handle complex situations that defy teachable technique. Any “skill” that can be evaluated with a test is suspect: as highlighted in “Is there a doctor in the house?”, we know that test scores are poor indicators of future performance. Most certification tests knowledge — not practice.

Certification makes sense at the level of agreed, objective standards with a foundation in absolute axioms. Lawyers are certified by the bar on the basis of understanding arbitrary rules of conduct called ethics. Lawyers must be ethical: they need not be moral. Certification comes from being able to mechanically recite codified workflows of court procedure and client engagement — not from the aptitude to follow them. The same is true for most technology certifications, whose exams largely test knowledge at the very lowest layers of the Bloom taxonomy. So though Scrum practice has moral and business implications, ScrumMaster certification is largely about level-1 Bloom knowledge (regurgitation) of elements of the Scrum framework and of technique.

Few certifying entities enforce the practices covered in the evaluation instrument. Maybe we presume that the certified should police themselves. If that’s the case, it’s difficult to argue the value of certification. Agile principles are rooted in continuous feedback — not a one-time assessment of one’s direction as in waterfall, but through ongoing introspection. Certification should be a process rather than an event or even a series of events. Even better is a life process of learning and of Kaizen mind. Certification pretends to establish the trust that can launch one’s career in some well-defined profession, but at best it only prepares one for a job.

Disclosure: IEEE Computer Society is a provider of two certifications for software developers: the CSDA and CSDP.



Jim makes some good points, but he overlooks aspects of the broader contexts where certifications can be very valuable.

Yes, in many fields (particularly ours) the most qualified tend not to be certified, and there is no evidence that certification corresponds to long-term career performance, especially since the super-competent end up learning broadly by themselves.

Yes, there are poor certifications that are narrowly focused, unvalidated, and not very challenging. But there are others which are broadly focused, are validated at least for internal consistency, and require extensive study to pass.

Yes, certification exams cannot assess ability to practice. But they can trigger people to start practicing the knowledge they have learned (see below), and in general those competent in practice will have a much easier time studying for and passing the certification. I see a very good correlation in university exams between scores for multiple-choice questions and scores for open-ended design questions.

Yes, continual improvement (Kaizen mind) is key. Good certifications, such as CSDP, require ongoing recertification.

Certification, like any examination process, can be extremely valuable for two things:

1) Weeding out the weak. Those without the background knowledge simply will not be able to pass a good-quality exam.

2) Ensuring people have broader knowledge and know the critical ‘edge cases’. You can be an excellent driver, but a certification in ‘defensive driving’ could save you in a time of peril.

Take programming: Somebody can become a super-competent programmer in a narrow language and domain. But a certification such as CSDP will force them to become aware of broader software engineering topics such as safety, modeling, productive processes, usability, mathematical rigour, etc. There is evidence (including my own research) that employers really need people with more general software engineering skills that most programmers simply do not pick up without some structured curriculum. Following a certification curriculum raises people’s awareness of the benefits of the knowledge in that curriculum, and encourages practitioners to practice the techniques. There is no guarantee that people will competently apply the knowledge, but there is a guarantee that if people do not know the knowledge, they will not apply it.

I myself can reflect on my career: I have learned many things by myself, but a considerable fraction of my learning has been triggered only by the need to take (or, in recent decades, to teach) a course with an exam. My study of modeling, statistics, and usability has in every case been a result of this: Most people are skeptical of the value of each of these until they are in some way forced to study the principles (e.g., for an exam or certification). Then many have a eureka moment and start practicing the knowledge.

When hiring, I look for all sorts of evidence of competence and potential: artifacts that are evidence of good practice (programs, writing); evidence that the candidate has had the initiative to learn in depth about me and what I do, and maybe preemptively contribute to my open source projects. If someone has a good-quality certification in addition to the above, then it would add to my confidence. But, yes, if somebody lists a bunch of narrow certifications and shows neither artifacts nor initiative, then the certifications can count against them as ‘padding’.

Posted on 1/6/13 7:43 AM.

Tim — thanks for jumping in the water here.

To take one tack on your argument: the problem with most approaches to safety, rigour, and usability is that they are arcane. I would certify well against any measure of UX tools and technique (GOMS, etc.), but that doesn't mean I have the experience to design a good interface (I certainly don't). Coming from telecom, I know that one of our first jobs with new hires was to get them to unlearn much of what they had been taught about good software engineering practice. I believe that none of these certifications can avoid context sensitivity and that, therefore, by definition, they cannot be broad.

Regarding your experience: Internal consistency of an exam guarantees neither internal nor external validity. The evidence indicates that certification does not, in North American culture, lead people to challenge themselves to improve on what they misunderstood. (Curiously, in Japan, it does. See any of the literature on the Dunning-Kruger effect or, in particular, "Divergent consequences of success and failure in Japan and North America: An investigation of self-improving motivations and malleable selves," Journal of Personality and Social Psychology 81(4), Heine et al., October 2001.)

If you could present some real numbers that substantiate your claims, I'd love to see them. Most of the research I've looked at goes counter to your claims. See "Project Management Certification does Not Predict Performance"; "The Relationship Between College Grades and Adult Achievement"; or "Tests Tell Us Little About Talent", Michael A. Wallach, American Scientist 64(1), January-February 1976. Wallach says: “Test scores and grades are not indices of merit in their own right; they are thought to provide a shorthand indication of a student’s competencies in the world outside testing... Recent research on the nature of talent indicates, however, that ... is false ... for the upper part of the range... More reliable answers ... by assessing what the subject does in a sample of the treatment situation itself... [The test] will tell you about the person’s response tendencies in situations that resemble the test rather than in situations that resemble the criterion.”

I would challenge you to respond to these perspectives and these research results. But maybe these are nits. I'm certainly with you with regard to the perspectives of your closing paragraph.

Posted on 1/6/13 5:59 PM in reply to Timothy Lethbridge.