Web Development

One fact that many JavaScript developers don’t realize is that there are only two levels of scope in JavaScript – the global scope and the function scope; there is no block scope. We often find code like:

function doSomething(){
    a = 1; // looks like a local variable, but isn't
}

Here we are not defining a variable called ‘a’; we are just assigning a value to a global variable of the same name. Such statements inside function scope should be avoided, for the simple reason that global variables result in complicated data flow. The right way to do it is:

function doSomething(){
    var a = 1; // 'var' makes 'a' local to doSomething
}

The point is that although JavaScript is not strict about variable declaration, ‘a = 1’ is not a convenient shorthand for ‘var a = 1’.
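
To see both points at once, here is a minimal sketch (the function and variable names are mine): ‘var’ is scoped to the enclosing function rather than to a block, and an assignment without ‘var’ silently creates a global.

function demo() {
    for (var i = 0; i < 3; i++) {
        var doubled = i * 2; // 'var' is function-scoped, not block-scoped
    }
    console.log(i);       // 3 -- 'i' is still visible after the loop
    console.log(doubled); // 4 -- so is 'doubled'
}
demo();

function leak() {
    b = 1; // no 'var': 'b' silently becomes a global
}
leak();
console.log(b); // 1 -- 'b' is now visible everywhere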


Recently I submitted my CSS for review. One piece of feedback suggested that I get rid of multiple ID attributes in a rule. My CSS had several selectors like the one below.

#grandparent #parent #child{/* property values */}

Since IDs are unique within a document, it makes little sense to use multiple ID attributes in a descendant selector. Sounds true: when something is unique, why would you need to qualify it with something else? So I wondered why the CSS specification asks us to add up the number of ID attributes when deciding the specificity of a rule.

There is one use that I can think of. Consider an application wherein you serve a default theme to the user (not necessarily an end user; it could be some other application). The user can customize your default theme, and the customization should be source-order agnostic: it must take precedence irrespective of whether it appears before or after the default theme. One strategy to achieve this is to design your default theme with moderate specificity. For example:

#child { /* default theme styles */ }

Now the user can add more ID attributes to the selector to increase the precedence of the custom stylesheet.

#parent #child { /* custom theme styles */ }

The above rule takes precedence irrespective of whether it appears before or after the default stylesheet. This could be achieved in different ways; we can increase specificity in several ways, but this seems to be the safest and surest way (barring inline ‘style’ and ‘!important’) for such scenarios, mainly because ID attribute values are unique. So users can override the rule precisely, without side effects.
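
As a rough illustration of that counting rule, here is a deliberately naive JavaScript sketch (the function name is my own); real specificity also counts classes, attributes and element names, but IDs outrank both of those tiers:

// Count the ID attributes in a selector -- the tier that dominates
// class and element counts when specificity is compared.
function idCount(selector) {
    var ids = selector.match(/#[\w-]+/g);
    return ids ? ids.length : 0;
}

idCount('#child');         // 1 -- the default theme
idCount('#parent #child'); // 2 -- the custom theme wins, regardless of source order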


Suppose we have a layer, say an unordered list with links in it, and we want to show and hide it. We can hide the layer by attaching a handler to its mouseout event. However, this doesn’t work as expected: because of event bubbling and the way browsers implement this event, mouseout fires whenever the mouse moves from the layer to a link within it, even though the pointer is still inside the layer. Depending on the implementation, this can make the layer flicker. One way out of this issue is to check, on mouseout, whether the ‘related target’ element (the element to which the mouse has moved) is a child of the layer; if it is, do nothing, else hide the layer. None of the links within the ‘ul’ will then trigger the hide, but when the mouse moves out of the ‘ul’ to its parent, the handler hides the layer.
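
A minimal sketch of that check, assuming the layer is a ‘ul’ with id ‘menu’ (the element id and the helper name isDescendant are my own):

var menu = document.getElementById('menu'); // the <ul> layer (assumed id)

// Walk up from 'node' to see whether it lies inside 'ancestor'.
function isDescendant(node, ancestor) {
    while (node) {
        if (node === ancestor) return true;
        node = node.parentNode;
    }
    return false;
}

menu.onmouseout = function (e) {
    e = e || window.event;
    // relatedTarget is the element the mouse moved to (older IE: toElement).
    var to = e.relatedTarget || e.toElement;
    if (isDescendant(to, menu)) return; // still inside the layer: do nothing
    menu.style.display = 'none';        // genuinely left the layer: hide it
};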

quirksmode has a detailed article on mouse events and the intricacies of the above-mentioned issue.

[Last night Nirav and I were discussing closures and their implementation. Nirav and I often discuss tech, and I’ve long been meaning to record those discussions. This is the beginning.]

(19:27:32) me: arre.. mereko closure samajh me aagaya.. [I understood closures]
(19:27:37) nirav: wah [great!] [my friend is always encouraging]
(19:27:44) nirav: isn’t that easy?
(19:27:53) nirav: and interesting at times?
(19:27:53) me: I read closure support in javascript
(19:27:56) nirav: yeah
(19:28:01) nirav: ECMA wala [the one defined in ECMA?]
(19:28:03) me: yeah..
(19:28:38) me: to answer ur question.. PHP doesn’t have native support for closures.. ppl use clumsy ways to emulate them.. which I don’t think are practicable..
(19:29:15) nirav: and how do they do it?
(19:29:16) me: may be because u can’t return a reference to a function in php..
(19:29:38) nirav: well i don’t need to return reference strictly
(19:29:43) me: they create objects for ‘closure’..
(19:29:46) nirav: i can use indirection using returning type
(19:29:59) me: didn’t get that..
(19:30:05) nirav: yeah, that’s how it is done in java as well
(19:30:06) nirav: i mean
(19:30:12) nirav: e.g.
(19:30:31) nirav: f * function(){}
(19:30:41) nirav: this function is returning function pointer to some closure
(19:30:48) nirav: or block of statement
(19:30:50) nirav: ok
(19:30:51) nirav: ?
(19:30:54) me: yeah,,
(19:30:56) nirav: now
(19:31:07) nirav: instead of returning raw pointer f*
(19:31:29) nirav: i can return a type say t => class t{ void f();}
(19:31:42) nirav: so that client of the method can get t and invoke f
(19:31:57) nirav: its indirect so ugly way of using ‘closure’
(19:32:05) me: hmm…
(19:32:10) nirav: samja? [understood?]
(19:32:14) me: 1 min..
(19:32:20) nirav: in java t would be interface
(19:32:29) nirav: with method f in it
(19:32:57) nirav: because in java you can’t return function pointers/ reference to blocks
(19:34:00) nirav: i was looking at this http://felix.sourceforge.net/
(19:34:05) nirav: sounds interesting
(19:34:41) me: in the above e.g. u return object of type ‘t’ right?
(19:34:49) nirav: yeah
(19:34:50) nirav: right
(19:35:33) nirav: where ‘t’ would be abstract class or interface to be nice
(19:35:47) me: and f() in that object uses the local variables declared in the outer function..
(19:36:23) nirav: yeah
(19:36:50) me: I’ll try that in php.. should be possible..
(19:36:54) nirav: ofcourse
(19:37:10) nirav: this is commonly employed in high level OO langs
(19:37:17) me: hmm..
(19:38:49) me: what are the problems with the method u said..?
(19:39:10) nirav: problems?
(19:39:17) nirav: problem is
(19:39:31) nirav: its pretty verbose
(19:39:50) nirav: and you have to write more and more code defeating the purpose of lambda expressions
(19:40:00) me: ok
(19:40:30) nirav: if i can directly express what i want to do in a lambda expression, i don’t need that ‘t’ and that ‘f’ declaration
(19:41:12) nirav: this way of creating closure is synthetic
(19:41:19) me: hmm..
(19:41:59) me: as in javascript support for inner functions and ability to return function references seems to be a better way..
(19:42:32) nirav: same is the case with python, ruby, scheme and many other lang which supports it
(19:42:38) me: hmm..
(19:42:58) nirav: java apparently supports it with the help of anonymous inner classes but there are a lot of problems
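
To make that last point concrete, here is a minimal JavaScript closure (the counter example is mine): the inner function captures a local variable and is returned as a plain value, with no wrapper type ‘t’ in sight.

function makeCounter() {
    var count = 0;         // local to makeCounter
    return function () {   // the returned function closes over 'count'
        count = count + 1;
        return count;
    };
}

var next = makeCounter();
next(); // 1
next(); // 2 -- 'count' survives between calls, kept alive by the closure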

[Note: treat ‘days’ as ‘months’ here. This was drafted long back] 

Few days back, when we were still mulling over improving a manual workflow in our system, I had some discussions with my Product Manager, collecting details about the existing workflow and the desired improvements. Plain English communication. The Product Manager, himself one of the regular users of the system, came up with some simple steps towards the goal of easing things. The previous two nights I had been reading “Writing Effective Use Cases” by Alistair Cockburn. A really good book on the subject: it starts with real, easy-to-understand examples, good to the extent that these examples are almost sufficient for most projects. So, after conversing with the Product Manager, I sat down to take note of the workflow he had talked of, just to keep track. While writing down the points I could easily relate what I was doing to the use cases described by Alistair: some steps towards a particular goal from the user’s perspective. Naturally, I found myself dividing the workflow into a set of use cases. Use cases, I feel, are a clear and concise way of recording requirements, the advantage being that their structure doesn’t impede our regular thought process on a subject; it rhymes very well with how we think about requirements. Following are a few immediate realizations I had about the usability of use cases.

  1. A nimble structure results in consistent communication of requirements. Usually, requirements documents have either an overwhelming structure or no structure at all, both detrimental to a project. I’ve seen long SRSs subdivided into numerous sections, each talking about the same goal in a separate manner. I had the *privilege* of working on a project that had some 15-20 pages of verbose SRS at the beginning; six months down the line the SRS remained unchanged, while a wonderful Excel sheet and a Mantis bug tracker held the hidden, changed requirements! A requirements management fiasco? Yeah, nothing better than that. Also, I wonder how an SRS can be made exhaustive in the incipient stages of a project. Are people smart enough to analyze all the details initially? More importantly, can such an overwhelming structure be produced collectively, or do a person or two work on the document in isolation, get it verified with stakeholders, modify it, and so on? And does a hugely structured document offer itself as a mindmap? Having no structure at all is even worse: requirements communicated through emails and emails only! No mindmap, no tracking of changes in requirements.
  2. Use cases are simple enough to be put down collaboratively in a project meeting (or in casual conversations with the stakeholders). Since we think in scenarios, we think of alternatives; and since we put scenarios down on paper, others can come up with alternatives to those scenarios, or with possible reuse of existing use cases. In short, use cases give wings to our analytical selves because they relate easily to our thought process.
  3. Easy to estimate and communicate estimates: Estimation is a different subject altogether, but the way we represent requirements affects the quality of estimation. For an organization that lacks the historical estimation data that formal estimation methods depend on, we would want to make the estimation process as intuitive as possible. In such scenarios a long, verbose SRS hardly helps us estimate properly: we have paragraphs of required functionality, but no ‘units’ of functionality, and it is easy to estimate (and also prioritize) in units. With use cases we have a well-defined, agreed-upon chunk of tasks, with alternatives, failure cases and risks jotted down, so estimating becomes intuitive.
  4. Reduced complexity in eliciting requirements: One inherent feature of use cases is that they speak of one and only one objective from the user’s perspective; the virtual user of a use case is not multitasking. Use cases have pre-conditions that specify what must be true for the scenario to take place, so we separate the scenario from the noise (not ignoring the noise, but treating it as a different unit).
  5. Post-conditions help us make sure that the desired state of the system is maintained after the scenario has occurred.

I would be grateful to be enlightened by your experiences with use cases and your views on the same.

Many a time, the code we are maintaining or enhancing is not well structured, is not comprehensible, and doesn’t follow sensible coding practices or guidelines. We should sense danger when we are not in a position to guarantee that a particular change or fix produces no side effects. Currently I am managing a code base with exactly these characteristics. Whenever my manager asks “Is this issue fixed?”, I can’t think of a better reply than “Looks like it is fixed.” And, not surprisingly, something or other is broken. I once read an apt analogy for this situation (I don’t remember where): “Hit a table in Tokyo and a building falls in New York.”

In such cases, I feel, we should keep the current system running as it is, branch it out as old_projectname in the version control system, and start afresh. While you are fixing things on the branch, record them so they are fixed in the new development too. But the most important question is whether project management is willing to plan for this clean-up. Mostly it is not, because until the issue is properly communicated to the managers they won’t feel the need for it. And with strict deadlines and expectations in place, it is a herculean task to distill all the maintainability problems and present them to get buy-in for the clean-up. It demands a critical analysis of the code to surface the hidden issues that otherwise end up hurting the reputation of a worthy developer. Yes, in the end it is the developer who is blamed for bugs and side effects, not the current state of the code.

I feel this process should occur in small steps: refactoring. First build tests for the existing code; then make one small change at a time. That way we don’t lose any of the capabilities of the existing system.
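
To make the first step concrete, here is a minimal sketch of a ‘characterization test’ that pins down current behaviour before touching anything; formatPrice is a hypothetical legacy function and assertEqual a stand-in for a real xUnit-style framework:

// A stand-in for some legacy function we dare not touch yet (hypothetical).
function formatPrice(amount) {
    return 'Rs. ' + amount.toFixed(2);
}

// Tiny assertion helper -- substitute a real xUnit framework in practice.
function assertEqual(actual, expected, message) {
    if (actual !== expected) {
        throw new Error(message + ': expected "' + expected + '" got "' + actual + '"');
    }
}

// Characterization tests: record what the code does *today*, so a
// refactoring that changes behaviour fails loudly instead of silently.
assertEqual(formatPrice(10), 'Rs. 10.00', 'whole rupees');
assertEqual(formatPrice(10.5), 'Rs. 10.50', 'fractional rupees');
assertEqual(formatPrice(0), 'Rs. 0.00', 'zero amount');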

It is high time managers stepped into the shoes of developers and understood the importance of maintainability (not for the sake of developers, but for the sake of quality, goodwill and progress).

This morning I got an email from Joe, Director of Agile methods and practices in my org., saying that he wants me to co-present a talk aimed at evangelizing CI to our development teams. I agreed. I am very much impressed by what CI promises: immediate feedback, better quality and so on. I don’t have hands-on experience with CI because the shops I’ve worked in till now were least concerned about best practices; best practices were pushed so far down the priority list that they were never visible. They were considered non-revenue-generating (forgetting the value and goodwill that these practices generate).

I put some of my thoughts here.

As I said earlier, I haven’t worked with CI before, apart from some reading. I was under the impression that CI requires a server like CruiseControl, until Joe pointed out that one is not necessary. CI is more a practice than a technology: a practice wherein we use technologies like configuration management, build tools and testing tools to ensure that integration bugs are detected sooner rather than later, before they become visible to the end user or grow in scope. It disciplines the process of integration by asking developers to build, integrate and run tests as frequently (or better, as atomically) as possible.

The next question that came to my mind was whether automated tests are mandatory for the benefits of CI to surface. The question arose because, in most software shops, developers are not used to writing automated tests, nor is there any effort in that direction, so making automated tests an unavoidable component of CI would drive such developers away with the increased learning curve. After all, writing good tests is not just learning an xUnit framework; it has a whole psychology to itself. Joe replied that CI can occur in the absence of automated tests, but only a fraction of its benefits will be perceived: manual testing on the integration server would reveal problems, but not as immediately and as exhaustively as well-written automated tests would.

A few days back there was a discussion wherein I put CI forward as one of the areas of focus and introduced the practice to the audience. One gentleman felt that CI seems better suited to ‘big’ projects than to ‘small’ ones. Well, he left it to us to decide what is ‘big’ and what is ‘small’, and we didn’t mind doing that! But for someone who hasn’t experienced integration problems (or rather, wasn’t sensitive enough to record them), including me, such a question is natural. What I feel is that it is good to adopt CI irrespective of project size and the number of developers working on it; after all, quality doesn’t depend on those factors either.

The project that I am working on now has the following process:

  • Checkout code from repository
  • Modify/add
  • Verify the changes against the repository status of the scripts
  • If everything is fine, check in the scripts
  • Whenever it is time for a demo, get the latest from CVS, go through the application to find any bugs.
  • Fix the bugs found and update the repository
  • Get the latest code from the repository onto the demo server and present it.

Sounds dangerous and haphazard, but it is a reality. My current focus is to improve this process bit by bit. Your suggestions are most welcome.