Agile software development has gained so much momentum that "agile" is now considered what "good" developers practice, while the formal methods that support the CMMI are associated with everything "bad" in software development. The term "agile" itself has a lot to do with this: anything that is not agile is assumed to be some antonym of agile, such as plodding, bloated, slow, or cumbersome. The resulting dichotomy has grown to the point that agile is mistakenly perceived as laissez-faire, "do what you want to do" software development, and anything else is automatically bad because it is not agile. In particular, many developers take the adoption of agile software development to mean that they no longer have to devote any effort to the actual design of the software; everything is done with iterations and prototypes between the actual users and the developers. This is nice, but it doesn't work.
It doesn't work because end-users don't do well watching a demonstration of some hacked-up prototype that only the developer can make work, and even then the demonstration is interspersed with pauses while the developer tweaks code right in front of everyone so that it can proceed. The users leave with a blurry impression of the prototype mixed in with examples of how the development environment works. It works even less well for those end-users who were unable to attend the demonstration in the first place. In the end, the end-user community has only a vague notion of what is actually going to be implemented. I have seen this happen over and over again in the name of agile software development. It doesn't work, and it's not the way agile is supposed to be.
Currently, four major methodologies appear to have emerged as predominant under the agile umbrella. Let's take a look at each one to see whether it supports the concept of a design effort:
Crystal Clear: Alistair Cockburn
The author specifically describes a release plan, use cases, design sketches, and a common object model. These are used to develop the actual code, test cases, and a user manual, among other things. There are many books and a huge number of third-party tools available to help with all of this modeling and sketching before any actual coding is done.
Extreme Programming: Kent Beck, et al.
Extreme Programming stresses the use of a "simple design" as opposed to an anticipatory design. The objective is to keep the design simple and add flexibility through refactoring only when and where necessary. For example, if a confirmation dialog is displayed only once in the application, it makes sense to hardcode the design as well as the implementation. Later, when it becomes necessary to sprinkle confirmation dialogs throughout the application, it makes sense to refactor the design to include a utility confirmation dialog that accepts parameters to influence its behavior. Either way, the refactoring happens at the design level before it happens at the code level.
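The confirmation-dialog refactoring above can be sketched as follows. This is a minimal, hypothetical illustration (the function names and the injected `ask` parameter are my own invention, not part of any of these methodologies); it shows the move from a single hardcoded confirmation to a parameterized utility once the need spreads through the application.

```python
# Before: the one-off confirmation is hardcoded where it is used.
def delete_record_hardcoded(record_id, ask=input):
    answer = ask("Delete this record? (y/n) ")
    return answer.strip().lower() == "y"

# After: confirmations now appear throughout the application, so the
# dialog is refactored into one utility whose behavior is controlled
# by parameters. `ask` is injected so the dialog can be tested.
def confirm(prompt, default=False, ask=input):
    """Generic confirmation dialog; empty input returns `default`."""
    suffix = " (Y/n) " if default else " (y/N) "
    answer = ask(prompt + suffix).strip().lower()
    if not answer:
        return default
    return answer == "y"

def delete_record(record_id, ask=input):
    # Call sites shrink to a one-line use of the shared utility.
    return confirm(f"Delete record {record_id}?", default=False, ask=ask)
```

The point of the example is the sequencing: the decision to introduce the `confirm` utility is a design-level change, made before the mechanical code change that follows from it.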
Feature-Driven Development: Jeff DeLuca, Peter Coad
FDD specifically describes a hierarchy starting with an overall model that is decomposed into features. Then come feature designs, followed by feature builds. FDD stresses planning by feature as well as designing by feature.
Scrum: Ken Schwaber
Scrum focuses on the concept of a product backlog, which is a list of all business and technology features envisioned for the product. The product backlog is a catchall place for everything that is or will be incorporated into the application. Out of this comes a release backlog, which is a selection of features that should be in the next product release. Finally, a sprint backlog is created consisting of features to be implemented in the current (30-day) sprint, along with the tasks required to implement each feature. Interestingly, a "feature" can be anything from a complete object model to a data entry user interface layout to a formatting and presentation mask (such as "Money").
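The backlog hierarchy above amounts to a simple data relationship: the release backlog is a subset of the product backlog, the sprint backlog is a subset of the release backlog, and only sprint items get broken into tasks. A hypothetical sketch (the class and field names here are invented for illustration, not Scrum terminology beyond the backlog names themselves):

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str                                   # e.g. an object model, a UI layout, a mask
    tasks: list = field(default_factory=list)   # filled in only for the current sprint

# Product backlog: everything envisioned for the product.
product_backlog = [
    Feature("complete object model"),
    Feature("data entry UI layout"),
    Feature("Money formatting mask"),
    Feature("audit trail report"),
]

# Release backlog: the subset selected for the next release.
release_backlog = product_backlog[:3]

# Sprint backlog: features for the current 30-day sprint, along with
# the tasks required to implement each one.
sprint_backlog = release_backlog[:2]
for feat in sprint_backlog:
    feat.tasks = [f"design {feat.name}", f"build {feat.name}", f"test {feat.name}"]
```

Note that design shows up again here: each sprint-backlog feature carries an explicit design task before its build task.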
While almost everyone advocates design, almost no one advocates formal documentation. For example, Extreme Programming recommends that most of the design be held in CRC cards, with unit tests and acceptance tests acting as substitutes for additional documentation. Everywhere I look, people are avoiding documentation like the plague. This probably has something to do with the old IEEE standards that specified huge volumes of documentation that never worked, because people don't read big books.
At the same time, I don't see a whole lot of actual unit testing and defined acceptance testing going on, even among teams that promote themselves as agile teams. The majority seem to be using agile as an excuse to just do prototypes, and everyone is surprised that the success rate in software development remains low even after more than a decade of agile adoption. I believe that agile methodologies are in danger of falling victim to the same "it's complicated and it doesn't work" meme that triggered the agile revolution in the first place.
So how do we handle this? I see some clues on the horizon. First, the bad:
- Enough empirical studies have been published to make a strong case that formal documentation (when done right, and used as a controlling artifact) dramatically increases the probability of success in software engineering efforts.
- At the detail level, when new laws are researched and framed, it is the congressional staffers that do all the work.
- Congressional staffers rely on academic researchers and research when exploring a new topic. Peer review is a powerful thing.
- There is bound to be a major software failure in the future that results in enough damage that Congress feels it has to take action to "fix" the problem of poor software quality in general.
- When that happens, CMMI and COBIT will win.
Now, the good:
- We know a lot more about knowledge acquisition and knowledge transfer now, and knowledge transfer between the end-users and developers is a major key to success in software development.
- Few people read big books anymore, but navigating through a giant Web site has become a natural skill.
- Multimedia is now easy to do, whether it is using a camera phone to take a snapshot of a white board for posting on a Web site or using a CASE tool to automatically generate a complete data model (diagrams and dictionary) as a hyperlinked set of Web pages.
- Web sites are easy to develop and update, while controlling who is working with what version of a document is a significant problem.
In short, daddy's documentation didn't work, and lousy documentation doesn't work either; but that's okay, because we now have the ability to represent knowledge in a form that actually seems to work. So how should we go about it? It sounds like we need an agile specification. That will be my focus for the next few months.