Thursday, November 22, 2007

A better understanding of the Capability Maturity Model

Having been a programmer for the first few years of my career, I moved to quality assurance some years ago to get a better understanding of the processes involved in building quality software products. My first assignment was to work with an external consultant and ensure that a 500+ member organization was assessed at Level 3 of the Capability Maturity Model (CMM). As a programmer, my typical lifecycle was to get requirements over the phone, come up with a design in my head, code it, unit test it, and move it into system and integration testing. There was not much planning or process to the whole thing: just churn out as much as you could in the shortest amount of time, and the system and integration testing done by the client would tell you all your defects. So when I entered the world of quality assurance, it was a whole new world.

One of the first things I learnt was the necessity of having a Quality Control (QC) function and a peer review activity to validate the fitness of the product being built. What I never figured out was why the QC function was part of Level 2 while peer reviews were part of Level 3. In that organization we did not really go through a Level 2 assessment before we went for Level 3; we implemented all the processes as if Level 2 and Level 3 were compressed into one big level. So both functions were implemented together, and we found it very hard to get them working seamlessly.

Subsequent to the successful implementation of CMM for that organization, I moved on to start a team of my own. We started to build products, but they kept getting killed at the User Acceptance Testing phase. So we built a sound requirements management practice and a testing (QC) practice, and we ensured that what we delivered met the expectations of our clients. These two happen to be Level 2 practices. Once we had them working well, we realised that this addresses only one aspect of quality in software: the functional aspect. There was a completely different aspect, the quality of the code itself, and the QC practice was not able to address that need. It was at this point that we started looking at peer reviews and ensuring that the code met certain benchmarks. We are now seeing our testing cycles come down and the code starting to perform better, improving the overall level of quality.

Another example: initially we did not have much historical data or experience in producing estimates or managing a project within them, and it was not unusual to go over an estimate many times over (Level 1). This forced us to come up with better estimation techniques and project management practices for individual projects (Level 2). Having done this for a couple of years, we are now trying to consolidate all the good practices we have gathered over time into a single place, to standardise the way we do things and make future projects easier to implement (Level 3).

I am now realising that even though we are not implementing the CMM model, we can relate our experiences to its various levels and the practices within them. It is very important to let one practice become institutionalized before the next is put in place; over time, all of this comes very naturally. So I don't think the agendas of many organizations, to achieve a certain level within short, predefined intervals of time, are sustainable if they actually want to see the benefits. It is important to allow each practice to develop and mature into something useful. The practitioners need to be enlightened about the usefulness of a practice, and enlightenment takes time and effort; it does not happen over a single training session. But will the economics behind getting assessed at CMM or CMMI allow things to mature and take their time? I don't think so. So anyone wanting to get assessed at a CMM or CMMI level should ask themselves: are we getting assessed for marketing reasons, or to actually improve things?