
Posts Tagged ‘Change Management’

 

Recently I learnt about an alarming situation regarding a system rolled out by a utility service for rendering emergency help to the public. It is quite imaginable that similar conditions prevail in a number of other towns and in various utility operations.

 

It appears that these systems are not developed centrally under controlled processes, as one would have thought, but as local initiatives. And they are developed with the help of freelancers or part-time professionals, not with employees on the rolls. This is understandable, for reasons of skills shortage and of cost. And dealing with an individual developer is a lot easier than dealing with a service-provider company, especially for carrying out changes. In this mode of development, the changes, more often than not, are occasioned by requirements originally missed owing to lax ways of specifying, rather than by requirements evolving over usage. So it is not unusual to see a spate of changes lined up soon after the roll-out.

 

The code rolled out on the production server does not match the code on the development server. Perhaps some pieces of code are lying somewhere on the development server, not easily traceable; or they are lost; or some changes were made directly on the production server. The seriousness of the situation has not sunk in yet: the ability to maintain and enhance the software is seriously compromised. For those of us who have been in this business long enough, this is not unusual. What is scary is that these are public-facing applications deployed in the field, whose malfunction could cause personal injury or, worse, loss of life.

 

Further, there is no concept of version control, nor a sandbox for thorough testing before roll-out. It is not known if the system was piloted before the full roll-out to iron out the wrinkles.

 

As if this were not enough, I understand the freelancer has moved on, for reasons not known. Maybe it was a cost-cutting measure, or the freelancer hiked his fees, or he had to put distance between himself and the mess that was created (not entirely his doing, perhaps). Needless to say, there was no formal hand-over process. When even the live-code inventory is incomplete, code or system documentation is too much to ask.

 

Into this scenario walks the freelancer’s replacement. Can you imagine his or her plight? And there is a pile of pending enhancement requests waiting to be taken up right away. What it takes to maintain software is not always appreciated. What is at stake is not the individual’s performance, but the safety of the public for whom the system is in operation.

 

Since public safety is involved, it is important that some minimal norms are enforced for developing and subsequently maintaining these systems, and that compliance with these norms is audited from time to time. Maybe such norms and audits are already established practice in some geographies.

 

This is not a call against freelancers; they deliver great value, flexibility and responsiveness (and skills too) to users whose software development needs are light and sporadic. It is a call for controlled processes for developing software, especially for moderate-to-high-impact applications.

 

Processes must be recognized as a mandatory part of any scenario in which software is developed or used for a serious application. The processes are not meant only for the developers; importantly, they also discipline the users (not the end-using public) who commission the developers and whose laxity is often the root cause of the ensuing mess. It follows that both the developers and the users must be educated on the processes to be followed.

 

Of course, the processes must be right-sized to suit the customer and the application; else they could enormously slow down the software development effort. Admittedly, right-sizing processes may not be a trivial exercise. Further, it is not necessary to induct a full set of processes, which could be quite intimidating for a light-load user. It is sufficient to implement a few key processes at a level of simplicity that serves the purpose.

 

This could be done with some minimal professional help. Maybe there are sources from which it is possible to buy right-sized processes, and to get the freelancers to follow them.

 

On the part of the software service providers, there is money on the table for those who come up with innovative models to develop and service systems for these light-load customers at an affordable TCO, without compromising on the norms.

 

Of course, an off-the-shelf packaged solution significantly mitigates the risks and must be strongly considered over custom development.


 

A question I often pose to software professionals is: how do you evaluate an OO design? Assume, for the present, that the functional completeness of the design is not in question. The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, or how much reuse is achieved… And some get into OO metrics.

 

I am rarely countered with the observation that the question is a wide-open one: there are several aspects to a design (some twenty-plus non-functional attributes), so which one do I have in mind for evaluating it? After all, a design is a model for realizing both the functional and the non-functional user requirements.

 

If asked to be more specific about my chief concern in regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software, these changes are easy to make; with some, it is gut-wrenching. And, today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with the business needs.

 

So the concern over the software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this just the usual non-functional attribute of ‘maintainability’? Maybe, in part. I would rather think of it as the legitimate evolution of the software, whereas ‘maintenance’ connotes status quo. And today the pace of this evolution has quickened, even in ‘stable’ businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. The question could also be turned on its head: how does one produce a design that readily accommodates changes?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts, such as encapsulation, inheritance and the interface mechanism, are suited to coping with change. Obviously, then, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning the design’s own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing out the nouns in the problem description… If this is done as it should be, the initial alignment is ensured. But this still does not guarantee a design capable of coping with the changes to come.
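As a toy illustration of the noun-driven alignment (the one-line problem statement and every name below are invented for this sketch, in Java): from ‘a customer places orders; each order lists products’, reading off the nouns yields candidate classes along these lines.

import java.util.ArrayList;
import java.util.List;

// Candidate classes read straight off the nouns of the (invented)
// problem statement: "A customer places orders; each order lists products."
class Product {
    final String name;
    final double unitPrice;
    Product(String name, double unitPrice) { this.name = name; this.unitPrice = unitPrice; }
}

class Order {
    final List<Product> items = new ArrayList<>();
    double total() { return items.stream().mapToDouble(p -> p.unitPrice).sum(); }
}

class Customer {
    final List<Order> orders = new ArrayList<>();
    void place(Order order) { orders.add(order); }
}

The verbs (‘places’, ‘lists’) then suggest the methods and associations.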

 

The same principle applies to changes. Changes, too, demand a set of abstractions in the areas of change, if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements and, importantly, in assessing how those requirements will change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section of the requirements document distinctly devoted to foreseeable evolutionary changes? Not in 9+ cases out of 10. No wonder our systems are not well equipped to adapt to the passage of time.

 

The software development community could counter: “How can we foresee changes to come? If we could, we would provide for them from the word go.” This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user’s table questions specifically targeting the future. Some changes are obvious in the trade, and these are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-person’s incentives or commission: the scheme for incentivising sales-persons changes from time to time, even mid-year, depending on the business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted, as the sketch below suggests.
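To make the point concrete, here is a minimal sketch in Java; the names and commission figures are invented for illustration. The volatile part, the incentive scheme, sits behind an interface, so a mid-year change of scheme means plugging in a new implementation rather than editing everything that computes payouts.

// Hypothetical domain record: the value of a sale and whether it came
// from a new client.
record Sale(double amount, boolean isNewClient) {}

// The area of change, abstracted: how commission is computed.
interface IncentiveScheme {
    double commissionFor(Sale sale);
}

// A healthy quarter: acquiring new clients is rewarded more.
class NewClientScheme implements IncentiveScheme {
    public double commissionFor(Sale sale) {
        return sale.amount() * (sale.isNewClient() ? 0.10 : 0.03);
    }
}

// A sluggish quarter: mining current accounts is rewarded more.
class AccountMiningScheme implements IncentiveScheme {
    public double commissionFor(Sale sale) {
        return sale.amount() * (sale.isNewClient() ? 0.04 : 0.08);
    }
}

// Payroll depends only on the abstraction; the concrete scheme is
// chosen at a single configuration point.
class Payroll {
    private final IncentiveScheme scheme;
    Payroll(IncentiveScheme scheme) { this.scheme = scheme; }
    double payoutFor(Sale sale) { return scheme.commissionFor(sale); }
}

When the business objective shifts, only the line that constructs Payroll changes: new Payroll(new AccountMiningScheme()) instead of new Payroll(new NewClientScheme()).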

 

However, plans to open a new office, to start a new distribution channel, to introduce a new pricing policy or new service offerings, to acquire a company…these may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be privy to some of these plans!

 

In summary, a formal and focused business-volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in future (to the foreseeable extent) are identified and the design abstractions are set up appropriately. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.
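At the parameterization end of that range, for instance, a volatile value such as a tax rate (the names below are again invented) can live in configuration data rather than in code, so that a statutory change becomes a data edit rather than a code change and re-test.

import java.util.Map;

// The volatile rates sit in a table, loaded, say, from a configuration
// file or a database; a change in tax law touches no logic.
class TaxTable {
    private final Map<String, Double> rates;
    TaxTable(Map<String, Double> rates) { this.rates = rates; }
    double taxOn(String category, double amount) {
        return amount * rates.getOrDefault(category, 0.0);
    }
}

// Usage: new TaxTable(Map.of("standard", 0.18, "reduced", 0.05))
//            .taxOn("standard", 100.0) yields 18.0.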

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in the user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when there are changes to current practice, reflecting them could turn out to be a minor or a significant effort, depending on the degrees of freedom in the initial lay-out. For instance, consider organizations that frequently reorganize their operations: divisions and departments merge and split, get centralized and decentralized…The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario is in some kind of flux. These requirements may get fleshed out later. Design abstractions must be able to cope with these too.

 

All along, the software architects and the designers were the ones required to think in abstractions. Are we now asking our Business Analysts to get into the groove as well? Yes, that’s the drift.

 

How do we build systems for businesses that are intrinsically very volatile? We will look at that in a post to follow.
