Web Development

Archive for August 2007

It often happens that the code we are maintaining or enhancing is poorly structured, hard to comprehend, and doesn’t follow sensible coding practices or guidelines. We should sense danger when we cannot guarantee that a particular change or fix produces no side effects. Currently, I am managing a code base with exactly these characteristics. Whenever my manager asks “Is this issue fixed?”, I can’t think of a better reply than “It looks like it is fixed.” And not surprisingly, something or other then breaks. I once read an apt analogy for this situation (I don’t remember where): “Hit a table in Tokyo and a building falls in New York.”

In such cases I feel we should keep the current system running as it is, branch it out as old_projectname in the version control system, and start afresh. While you are fixing things on the branch, record those fixes so they can be applied to the new development too. But the most important question is whether project management is willing to plan for this clean-up. Mostly it is not, because until the issue is properly communicated to the managers, they won’t feel the need for it. And with strict deadlines and expectations in place, it is a herculean task to distill all the maintainability problems and present them in a way that wins buy-in for the clean-up. It demands a critical analysis of the code to surface the hidden issues that otherwise end up damaging the reputation of a worthy developer. Yes, in the end it is the developer who is blamed for bugs and side effects, not the current state of the code.

I feel this process should occur in small steps: refactoring. First build tests for the existing code, then make one small change at a time. That way we don’t lose any of the capabilities of the existing system.
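
The “tests first, then small changes” idea can be sketched with a characterization test. This is a hedged example with hypothetical names: I’m assuming some legacy function `format_price` whose current behaviour we want to pin down before daring to touch it.

```python
import unittest

def format_price(cents):
    # Hypothetical legacy function: we don't fully trust it,
    # so before refactoring we capture its current behaviour.
    dollars = cents // 100
    remainder = cents % 100
    return "$%d.%02d" % (dollars, remainder)

class CharacterizationTests(unittest.TestCase):
    """Pin down what the code does today, before changing it."""

    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(format_price(705), "$7.05")

    def test_zero(self):
        self.assertEqual(format_price(0), "$0.00")

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite without exiting the interpreter
```

Once such tests are green, each small refactoring step is followed by re-running them; any red test means the step silently changed behaviour.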

It is high time managers stepped into the shoes of developers and understood the importance of maintainability (not for the sake of developers, but for the sake of quality, goodwill and progress).


This morning I got an email from Joe, Director of Agile methods and practices in my org., saying that he wants me to co-present a talk aimed at evangelizing CI to our development teams. I agreed. I am very impressed by what CI promises: immediate feedback, better quality, and so on. I don’t have hands-on experience with CI because the shops I’ve worked in so far were barely concerned with best practices. Best practices were pushed so far down the priority list that they were never visible; they were considered non-revenue-generating (forgetting the value and goodwill that these practices generate).

I put some of my thoughts here.

As I said earlier, I haven’t worked with CI before, apart from some reading. I was under the impression that CI requires a server like CruiseControl, until Joe pointed out that it doesn’t. CI is more a practice than a technology. It is a practice in which we use technologies like configuration management, build tools, and testing tools to ensure that integration bugs are detected sooner rather than later, before they become visible to the end user or grow in scope. It disciplines the integration process by asking developers to build, integrate, and run tests as frequently (or better, as atomically) as possible.
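
The build-integrate-test cycle described above can be sketched as a simple sequence of steps, each of which must pass before the next runs. This is only an illustrative outline with hypothetical step names, not any particular CI tool’s API; in a real setup each step would shell out to the version control system, the build tool, and the test runner.

```python
# Minimal sketch of one CI cycle. Each step returns True on
# success; the cycle stops at the first failure, so the team
# gets immediate, specific feedback about what broke.

def update_from_repository():
    # e.g. run "cvs update" in a real setup
    return True

def build():
    # e.g. invoke Ant or Make; a compile error fails fast here
    return True

def run_tests():
    # e.g. run the xUnit suite; integration breakage shows up here
    return True

def ci_cycle():
    steps = [("update", update_from_repository),
             ("build", build),
             ("test", run_tests)]
    for name, step in steps:
        if not step():
            return "FAILED at %s" % name
    return "SUCCESS"

print(ci_cycle())  # every check-in would trigger one such cycle
```

The point of the sketch is the ordering and the fail-fast behaviour: the practice is the loop itself, whether or not a dedicated server drives it.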

The next question that came to my mind was whether automated tests are mandatory for the benefits of CI to surface. The question arose because, in most software shops, developers are not used to writing automated tests, nor is there any effort in that direction. So making automated tests an unavoidable component of CI would drive developers away because of the increased learning curve. After all, writing good tests is not just about learning an xUnit framework; there is a whole psychology to it. Joe replied that CI can occur in the absence of automated tests, but only a fraction of its benefits will be realized. Manual testing on the integration server would still reveal problems, but not as immediately or as exhaustively as well-written automated tests would.

A few days back there was a discussion in which I listed CI as one of the areas of focus and introduced the practice to the audience. One gentleman suggested that CI seems better suited to ‘big’ projects than ‘small’ ones. Well, he left it to us to decide for ourselves what is ‘big’ and what is ‘small’, and we didn’t mind doing that! But for someone who hasn’t experienced integration problems (or rather, wasn’t attentive enough to record them), including me, such a question is natural. What I feel is that it is good to adopt CI irrespective of project size and the number of developers working on it. After all, quality doesn’t depend on those factors either.

The project that I am working on now has the following process:

  • Checkout code from repository
  • Modify/add
  • Diff the changes against the repository version of the scripts
  • If everything looks fine, check in the scripts
  • Whenever it is time for a demo, get the latest from CVS, go through the application to find any bugs.
  • Fix the bugs found and update the repository
  • Get the latest code from the repository onto the demo server and present it.

It sounds dangerous and haphazard, but it is reality. My current focus is to improve this process bit by bit. Your suggestions are most welcome.
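
One way to see the risk in the process above: bugs introduced across many check-ins are only discovered in a batch at demo time, whereas verifying after every check-in surfaces them one at a time. This toy sketch (hypothetical data, not real project history) just makes that contrast concrete.

```python
# Hypothetical commit history: True means a check-in introduced a bug.
commits_with_bug = [False, True, False, True, True]

def bugs_found_at_demo(history):
    # Current process: nobody looks until the demo, so every
    # bug introduced since the last demo piles up and lands at once.
    return sum(1 for has_bug in history if has_bug)

def max_bugs_open_with_per_commit_checks(history):
    # CI-style process: verify after each check-in and fix before
    # moving on, so at most one bug is ever open at a time.
    return 1 if any(history) else 0

print(bugs_found_at_demo(commits_with_bug))                # 3 bugs at demo
print(max_bugs_open_with_per_commit_checks(commits_with_bug))  # at most 1 open
```

Same number of mistakes in both cases; the difference is how many are open at once and how far each one has drifted from the change that caused it.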