Web Development

Archive for the ‘Practices’ Category

[Note: treat ‘days’ as ‘months’ here. This was drafted a while back.]

A few days back, when we were still mulling over improving a manual work-flow in our system, I had some discussions with my Product Manager to collect details about the existing work-flow and the desired improvements. Plain English communication. The Product Manager, himself a regular user of the system, came up with some simple steps towards the goal of easing things. Over the previous two nights I had been reading “Writing Effective Use Cases” by Alistair Cockburn. A really good book on the subject. It starts with real, easy-to-understand examples — good to the extent that these examples are almost sufficient for most projects. So, after conversing with the Product Manager, I sat down to take note of the work-flow that he spoke of, just to keep track. While writing down the points I could easily relate what I was doing to the usecases described by Alistair: some steps towards a particular goal from the user’s perspective. Naturally, I realized I should divide the work-flow into a set of usecases. Usecases, I feel, are a clear and concise way of recording requirements. The advantage is that their structure doesn’t impede our regular thought process on a subject; it rhymes very well with how we think about requirements. Following are a few immediate realizations I had about the usability of usecases.

  1. A nimble structure results in consistent communication of requirements. Usually, requirements documents either have an overwhelming structure or no structure at all; both are detrimental to a project. I’ve seen long SRSs subdivided into numerous sections, each talking about the same goal in a separate manner. I had the *privilege* of working on a project that had some 15-20 pages of verbose SRS at the beginning; six months down the line the SRS remained unchanged, and we had a wonderful Excel sheet and a Mantis bug tracker with hidden, changed requirements in them! Requirements management fiasco? Yeah, nothing better than that. I also wonder how an SRS can be made exhaustive in the incipient stages of a project. Are people smart enough to analyze all the details initially? More importantly, can such an overwhelming structure be produced collectively? Or is it a person or two who work on the document in isolation, get it verified with stakeholders, modify it, and so on? And does a hugely structured document offer itself as a mindmap? Having no structure at all is even worse: requirements communicated through emails and emails only! No mindmap, no tracking of changes in requirements.
  2. Usecases are simple enough to be put down collaboratively in a project meeting (or even in casual conversations with the stakeholders). Since we think of scenarios, we think of alternatives. Since we put scenarios down on paper, others are able to come up with alternatives to those scenarios, or possible reuse of usecases. In short, usecases give wings to our analytical selves because they relate easily to our thought process.
  3. Easy to estimate and to communicate estimates: Estimation is a different subject altogether. However, the way we represent requirements affects the quality of estimation. Assuming an organization that doesn’t have the historical estimation data that formal estimation methods rely on, we would want to make the estimation process as intuitive as possible. In such scenarios a long, verbose SRS hardly helps us estimate properly: we have paragraphs of required functionality but not ‘units’ of required functionality, and it is easy to estimate (and also prioritize) in units. With usecases we have a well-defined and agreed-upon chunk of work, with alternatives, failure cases and risks jotted down. So estimating becomes intuitive.
  4. Reduced complexity in eliciting requirements: One inherent feature of usecases is that they speak of one and only one objective from the user’s perspective. The virtual user of the usecase is not multitasking. We have pre-conditions that specify what should be true for the scenario to take place. Hence, we are separating the scenario from the noise (not ignoring the noise, but treating it as a different unit).
  5. Post-conditions help us make sure that the desired state of the system is maintained after the scenario has occurred.

I would be grateful to be enlightened by your experiences with usecases and your views on the same.


Many a time the code we are maintaining or enhancing is not well structured, is not comprehensible, and doesn’t use sensible coding practices or guidelines. We should sense danger when we are not in a position to guarantee that a particular change or fix doesn’t produce side-effects. Currently, I am managing a code base with exactly these characteristics. Whenever my manager asks “Is this issue fixed?”, I can’t think of a better reply than “Looks like it is fixed”. And not surprisingly, something or other is broken. I read an apt analogy for this situation (I don’t remember where): “Hit a table in Tokyo and a building falls in New York”.

In such cases, I feel we should keep the current system running as it is, branch it out as old_projectname in the version control system, and start afresh. While you are fixing things on the branch, record them so they can be fixed in the new development too. But the most important question is whether project management is willing to plan for this clean-up. Mostly it is not, because until the issue is properly communicated to the managers they won’t feel the need for it. And with strict deadlines and expectations in place, it is a herculean task to distill all the maintainability problems and present them to get buy-in for the clean-up. It would demand a critical analysis of the code to surface the hidden issues that otherwise end up tarnishing the reputation of a worthy developer. Yes, in the end it is the developer who is blamed for bugs and side-effects, not the current state of the code.

I feel this process should occur in small steps at a time: refactoring. First build tests for the existing code, then make one small change at a time. That way we don’t lose any of the capabilities of the existing system.
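As a concrete illustration, “first build tests for the existing code” usually means writing characterization tests: tests that pin down whatever the legacy code does today, right or wrong, so refactoring can’t silently change it. A minimal sketch in Python — the function and its discount rules are purely hypothetical, not from my code base:

```python
import unittest

# Hypothetical legacy function we dare not change yet; the names and
# rules are illustrative, not from any real system.
def legacy_discount(order_total, customer_type):
    # Convoluted legacy logic, reproduced as-is.
    if customer_type == "gold":
        if order_total > 100:
            return order_total * 0.9
        return order_total
    return order_total if order_total <= 500 else order_total * 0.95

class CharacterizationTests(unittest.TestCase):
    """Pin down the current behaviour before refactoring anything."""

    def test_gold_customer_large_order(self):
        self.assertEqual(legacy_discount(200, "gold"), 180.0)

    def test_regular_customer_small_order(self):
        self.assertEqual(legacy_discount(100, "regular"), 100)

    def test_regular_customer_large_order(self):
        self.assertEqual(legacy_discount(600, "regular"), 570.0)
```

Run with `python -m unittest`. Once a later change breaks one of these tests, we know immediately that behaviour has drifted, instead of hearing about it from a user after the fix.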

It is high time managers stepped into the shoes of developers and understood the importance of maintainability (not for the sake of developers, but for the sake of quality, goodwill and progress).

This morning I got an email from Joe, Director of Agile methods and practices in my org., saying that he wants me to co-present a talk with a view to evangelizing continuous integration (CI) to our development teams. I agreed. I am very much impressed by what CI promises: immediate feedback, better quality, etc. I don’t have hands-on experience with CI because the shops I’ve worked in till now were least concerned about best practices. Best practices were pushed so far down the priority list that they were never visible; they were considered non-revenue-generating (forgetting about the value and goodwill that these practices generate).

I put some of my thoughts here.

As I said, I haven’t worked on CI before, except for some reading. I was under the impression that CI requires a server like CruiseControl, until Joe pointed out that it is not necessary. CI is more a practice than a technology. It is a practice wherein we use technologies like configuration management, build tools, testing tools, etc. to ensure that integration bugs are detected sooner rather than later — before they become visible to the end-user or grow in scope. It disciplines the process of integration by asking developers to build, integrate and run tests as frequently (or better, as atomically) as possible.
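Stripped of any particular server, the practice boils down to a loop: update, build, test, and stop loudly at the first failure. A rough sketch of that loop — the `cvs` and `ant` commands are placeholders for whatever update/build/test tools a project actually uses:

```python
import subprocess

# Placeholder commands; substitute your project's real update,
# build, and test invocations.
STEPS = [
    ["cvs", "update", "-d"],   # get the latest sources
    ["ant", "compile"],        # build
    ["ant", "test"],           # run the automated tests
]

def run_pipeline(steps, runner=subprocess.call):
    """Run each step in order; report the first one that fails.

    `runner` is injectable so the loop can be exercised without
    actually shelling out.
    """
    for cmd in steps:
        if runner(cmd) != 0:
            return "FAILED: " + " ".join(cmd)
    return "SUCCESS"
```

Whether this loop is triggered by a dedicated server on every commit or by a developer’s habit before every check-in is secondary; the discipline of running it frequently is the practice itself.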

The next question that came to my mind was whether it is mandatory to have automated tests for the benefits of CI to surface. This question arose because it is a fact that in most software shops developers are not used to writing automated tests, nor is there any effort in that direction. So having automated tests as a mandatory component of CI would drive developers away from it due to the increased learning curve. After all, writing good tests is not just learning an xUnit framework; it has a whole psychology in itself. Joe replied saying that CI can occur in the absence of automated tests, but only a fraction of its benefits will be perceived. There would be manual testing on the integration server that would reveal problems, but not as immediately and as exhaustively as well-written automated tests would.

A few days back there was a discussion wherein I put CI forward as one of the areas of focus and introduced the practice to the audience. One gentleman expressed that CI seems better suited to ‘big’ projects than ‘small’ ones. Well, he left it to us to decide for ourselves what is ‘big’ and what is ‘small’, and we didn’t mind doing that! But for someone who hasn’t experienced integration problems (or rather wasn’t sensitive enough to record integration problems), including me, such a question is obvious. What I feel is that it is good to adopt CI irrespective of the project size and the number of developers working on it. After all, quality doesn’t depend on those factors either.

The project that I am working on now has the following process:

  • Checkout code from repository
  • Modify/add
  • Verify the changes against the repository status of the scripts
  • If everything is fine, check in the scripts
  • Whenever it is time for a demo, get the latest from CVS, go through the application to find any bugs.
  • Fix the bugs found and update the repository
  • Get the latest code from the repository onto the demo server and present it.

Sounds dangerous and haphazard, but it is a reality. My current focus is to improve this process bit by bit. Your suggestions are most welcome.
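One low-cost first improvement to the demo step above: replace some of the manual “go through the application to find any bugs” clicking with a scripted smoke check of the key pages. A rough sketch — the server name and page list are made up, and `fetch` is injectable so the check can be tried without a live server:

```python
import urllib.request

# Hypothetical demo server and pages; substitute your real URLs.
BASE = "http://demo-server.example.com"
PAGES = ["/", "/login", "/reports"]

def smoke_check(base, pages, fetch=None):
    """Return the list of pages that fail to load, so the demo
    holds no surprises. `fetch` should return an HTTP status code."""
    if fetch is None:
        fetch = lambda url: urllib.request.urlopen(url).status
    failures = []
    for page in pages:
        try:
            if fetch(base + page) != 200:
                failures.append(page)
        except Exception:
            failures.append(page)
    return failures
```

Even a crude check like this, run right after “get the latest from CVS”, would catch a broken deployment minutes before the demo instead of during it.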