
Archive for the ‘Software Maintenance’ Category

I was staying in a fairly large house, well maintained, surrounded by a number of flowering trees and plants, home to countless birds that treated us to a melodious cacophony announcing their morning foray and their homecoming in the evening. It was time for the trees to renew themselves – service staff came in the morning and again in the afternoon to sweep up the leaves copiously shed by the trees on the front-yard. The flowering plants, however, were still abloom. At times, at my touch, a bee would startle me, flying out from deep inside a flower.

For one who has lived all his life in Mumbai flats (apartments), where one cannot take ten steps without hitting a wall, one’s auditory nerves constantly assaulted by the caws of sullen crows and the barking of stray (and house) dogs, this was an overwhelming experience. The spacious front-yard was where I took my mandatory morning and evening walks, my senses enjoying the sights and sounds around.

Get the picture?

The only blot on the scene was the rubble piled up near the neem tree at one corner of the front of the house. The house owner had not cleared it, intending to reuse it in the future, possibly for patching up parts of the yard.

Yesterday morning, walking near the neem tree, I saw a splash of red dried up on the debris. I had not seen it before. Clearly someone, possibly one of those tradesmen called in for some repair work, had used the spot as a spittoon after chewing a paan (betel leaf + lime + areca-nut shavings + whatever). Unfortunate but true: in this country one may freely spit in public or even common spaces, but never so within a house. But the perpetrator saw it differently – if the corner was good enough for piling up rubble, with no one minding, it was okay for him to spit there.

The ‘Broken Window’ syndrome playing out!

[Image: Broken windows, Northampton State Hospital]

From Wikipedia: ‘Under the broken windows theory, an ordered and clean environment, one that is maintained, sends the signal that the area is monitored and that criminal behavior is not tolerated. Conversely, a disordered environment, one that is not maintained (broken windows, graffiti, excessive litter), sends the signal that the area is not monitored and that criminal behavior has little risk of detection.’

A few broken windows, at times even one, left unfixed for some time are a trigger, an invitation for many more, if not all, to be broken.

Much is written on this syndrome as a subject of study under criminology and urban sociology.

Outside of crime, the phenomenon may be observed in many other contexts: projects, product development, organizations, communities and even in personal life.

When a project manager leaves unfixed the first infractions of deadlines, quality standards or team discipline, the first window is broken. His team reads the signal. Very likely he would, to his grief, witness many more ‘broken windows’ before long, on his way down and out.

End

 

Source: Wikipedia

Read Full Post »

 

Recently I learnt of an alarming situation regarding a system rolled out by a utility service for rendering emergency help to the public. It is quite imaginable that similar conditions prevail in a number of other towns and in various utility operations.

 

It appears that these systems are not developed centrally under controlled processes, as one would have thought, but more as local initiatives. And they are developed with the help of freelancers or part-time professionals, not with employees on the rolls. This is understandable for reasons of skill shortage and of cost. And dealing with an individual developer is a lot easier than dealing with a service-provider company, especially for carrying out changes. In this mode of development, the changes, more often than not, are occasioned by requirements originally missed due to lax specification, rather than by requirements evolving through usage. So it is not unusual to see a spate of changes lined up soon after the roll-out.

 

The code rolled out on the production server does not match the code on the development server. Perhaps some pieces of code are lying somewhere on the development server, not easily traceable; or they are lost; or some changes were made directly on the production server. The seriousness of the situation has not sunk in yet: the ability to maintain and enhance the software is seriously compromised. For those of us who have been in this business long enough, this is not unusual. But what is scary is that these are public-facing applications rolled out in the field, whose malfunction could cause personal injury or, worse, loss of life.
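
Proper version control is the real fix; but even before that is in place, a crude baseline audit can at least detect such drift. Below is a minimal sketch (the paths and the file pattern are hypothetical): record a checksum of every deployed source file at roll-out, and compare the production tree against that snapshot later.

```python
# A minimal sketch (paths hypothetical) of a crude drift check: snapshot
# checksums at roll-out time, then compare the production tree against the
# snapshot to detect code changed or lost outside version control.
import hashlib, json, pathlib

def snapshot(root: str) -> dict:
    """Map each source file's relative path to its SHA-256 digest."""
    base = pathlib.Path(root)
    return {str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(base.rglob("*.py"))}

def drift(baseline: dict, current: dict) -> dict:
    """Report files added, removed or modified since the baseline."""
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "modified": sorted(k for k in baseline.keys() & current.keys()
                           if baseline[k] != current[k]),
    }

# At roll-out time:
#     json.dump(snapshot("/srv/app"), open("baseline.json", "w"))
# At a later audit:
#     print(drift(json.load(open("baseline.json")), snapshot("/srv/app")))
```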

 

Further, there is no concept of version control, or of a sandbox for thorough testing before roll-out. It is not known if the system was piloted before the full roll-out to iron out wrinkles.

 

As if this were not enough, I understand the freelancer has gone away, for reasons not known. Maybe it was a cost-cutting measure, or the freelancer hiked his fees, or he had to put distance between himself and the mess that was created (perhaps not entirely his doing). Needless to say, there was no formal hand-over process. When even the live-code inventory is incomplete, code or system documentation is too much to ask for.

 

Into this scenario walks the freelancer’s replacement. Can you imagine his or her plight? And there is a pile of pending enhancement requests waiting to be taken up right away. What it takes to maintain software is not always appreciated. What is at stake is not the individual’s performance but the safety of the public users for whom the system operates.

 

Since public safety is involved, it is important that some minimal norms are enforced for developing and subsequently maintaining these systems, and that compliance with these norms is audited from time to time. Maybe such norms and audits are already an established practice in some geographies.

 

This is not a call against freelancers, who deliver great value, flexibility and responsiveness (and skills too) to users whose need for software development is light and sporadic. It is a call for controlled processes for developing software, especially for moderate-to-high-impact applications.

 

Processes must be recognized as a mandatory piece of any scenario in which software is developed or used for a serious application. The processes are not meant only for the developers; importantly, they also discipline the users (not the end-using public) who commission the developers and whose laxity is often the root cause of the ensuing mess. It follows that both the developers and the users must be educated on the processes to be followed.

 

Of course, the processes must be right-sized to suit the customer and the application; else they could enormously slow down the software development effort. Admittedly, right-sizing processes may not be a trivial exercise. Further, it is not necessary to induct a full set of processes, which could be quite intimidating for a light-load user. It is sufficient to implement a few key processes at a level of simplicity that serves the purpose.

 

This could be done with some minimal professional help. Maybe there are sources from which one can buy right-sized processes, and get the freelancers to follow them.

 

On the part of software service providers, there is money on the table for those who come up with innovative models to develop and service systems for these light-load customers at an affordable TCO (total cost of ownership), without compromising on the norms.

 

Of course, an off-the-shelf packaged solution significantly mitigates the risks and must be strongly considered over custom development.

Read Full Post »

 

A question I often pop at software professionals is: how do you evaluate an OO design? (We assume, for the present, that the functional completeness of the design is not in question.) The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, and how well reusability is exploited; some get into OO metrics.

 

I am rarely countered with the objection that the question is a wide-open one: there are several aspects to a design (some twenty-plus non-functional attributes), so which one do I have in mind for evaluating it? After all, a design is a model for realizing both functional and non-functional user requirements.

 

If I were asked to be more specific about my chief concern regarding design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software these changes are easy to make; with some, gut-wrenching. And today a good part of any IT (non-Capex) budget is spent on changing software in step with business needs.

 

So the concern over a design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this just the usual non-functional attribute of ‘maintainability’? Maybe, in part. I would rather think of it as the legitimate evolution of the software, whereas ‘maintenance’ connotes status quo. And today the pace of this evolution has quickened even in ‘stable’ businesses.

 

Now let us proceed to figure out what could be the criterion for evaluating a design from this perspective. The question could also be turned on its head: how does one produce a design that readily accommodates changes?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts – encapsulation, inheritance, the interface mechanism, etc. – are suited to coping with changes. Obviously, then, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing the nouns in the problem description. If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design can cope with the changes to come.

 

The same principle applies to changes. Changes, too, demand a set of abstractions in the areas of change, if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements – and, importantly, of assessing how those requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section of the requirements document distinctly devoted to foreseeable evolutionary changes? Not in nine-plus cases out of ten. No wonder our systems are not well equipped to adapt to the flow of time.

 

The software development community could come back with: “How can we foresee changes to come? If we could, we would provide for them from the word go.” This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user’s table questions specifically targeting the future. Some changes are obvious in the trade, and these are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-persons’ incentives or commission: The scheme for incentivising sales-persons changes from time to time, even mid-year, depending on the business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly, the scheme needs to be abstracted.
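
As a minimal sketch of such an abstraction (all names and rates below are hypothetical), the scheme can be modeled as a pluggable strategy: callers depend only on the abstraction, and a mid-year change of scheme swaps one object instead of rewriting every pay-out computation. Tax rules, the simpler example above, could likewise live in a parameter table rather than in code.

```python
# A minimal sketch, with hypothetical names, of abstracting a volatile
# business rule: the incentive scheme is a pluggable strategy, so a
# mid-year change swaps one object instead of rewriting the callers.
from abc import ABC, abstractmethod

class IncentiveScheme(ABC):
    @abstractmethod
    def commission(self, revenue: float, is_new_client: bool) -> float: ...

class NewClientFocus(IncentiveScheme):
    """Healthy quarter: reward new-client acquisition more heavily."""
    def commission(self, revenue, is_new_client):
        return revenue * (0.08 if is_new_client else 0.03)

class AccountMiningFocus(IncentiveScheme):
    """Sluggish quarter: reward mining of current accounts."""
    def commission(self, revenue, is_new_client):
        return revenue * (0.03 if is_new_client else 0.06)

def pay_out(scheme: IncentiveScheme, revenue: float, is_new_client: bool) -> float:
    # Callers depend only on the abstraction; the scheme in force is
    # chosen (say, from configuration) in one place.
    return scheme.commission(revenue, is_new_client)

print(pay_out(NewClientFocus(), 10_000.0, True))       # 800.0
print(pay_out(AccountMiningFocus(), 10_000.0, False))  # 600.0
```

The same shape accommodates a scheme no one has thought of yet: a new subclass, and nothing else moves.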

 

However, plans to open a new office, to start a new distribution channel, to introduce a new pricing policy or new service offerings, to acquire a company… may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be wise to some of these plans!

 

In summary, a formal and focused business-volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in the future (to the foreseeable extent) are identified and the design abstractions are appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for current business practice is not a trivial effort. And when current practice changes, reflecting those changes could turn out to be a minor or a significant effort, depending on the degrees of freedom in the initial lay-out. For instance, consider organizations that frequently reorganize their operations – divisions and departments merge and split, get centralized and decentralized… The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario may be in some kind of flux. These requirements may get fleshed out later. Design abstractions must be able to cope with these too.

 

All along, software architects and designers have been required to think in abstractions. Are we now asking our Business Analysts to get into the groove too? Yes, that’s the drift.

 

How do we build systems for businesses that are intrinsically very volatile? We will look at that in a post to follow.

Read Full Post »

 

(contd.)

 

Before we get to the core, I must mention an interesting solution that, in a way, addresses the concerns voiced earlier about mixing business logic into Processing Reports. Recently I came across a case study in which an MNC had the usual problem of reporting from a variety of dispersed and disparate sources such as Excel sheets, legacy ERP systems, etc. The reports were quite complex, and there was no single container to host all the processing logic. The organization deployed an ETL tool that fitted the bill and loaded the data into a Reporting Database! This SQL database was used purely for reporting. On this database they also built their business logic as a uniform interface, and pulled their reports from it. This architecture certainly fixed the problem of scattered business logic. The data was still transactional; the database was not of the data-warehouse kind.
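
A minimal sketch of the idea, with hypothetical table and column names and SQLite standing in for the reporting database: ETL-loaded rows land in one SQL store, and every report draws on a single business-logic layer built over it.

```python
# A minimal sketch of the reporting-database idea, with hypothetical
# table and column names. Transactional rows are ETL-loaded into one SQL
# database; business logic lives in one reporting layer on top of it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE invoices (
    invoice_id TEXT, region TEXT, amount REAL, paid INTEGER)""")

# ETL step: rows gathered from disparate sources land here.
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?, ?)",
    [("INV-1", "West", 1200.0, 0), ("INV-2", "West", 800.0, 1),
     ("INV-3", "East", 450.0, 0)],
)

def payment_pending_by_region(conn):
    """Uniform reporting interface: the business rule (here, 'pending'
    means paid = 0) is encoded once; every report draws on this layer."""
    return conn.execute(
        "SELECT region, SUM(amount) FROM invoices "
        "WHERE paid = 0 GROUP BY region ORDER BY region").fetchall()

print(payment_pending_by_region(conn))  # [('East', 450.0), ('West', 1200.0)]
```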

 

Even when there is a container application, like an ERP instance, to host the business logic and the reports, this solution may be an alternative meriting serious consideration when the reports are quite complex. The downsides of the approach are: a) the introduction of an intermediate step, with possible time delays; b) the output business logic is still separated from the transaction-related business logic; c) views may be generated from the ERP instance or from the Reporting Database, with the challenge of making them look alike; and d) the reports have to be coded explicitly instead of using the ERP-native report generator. And all the associated maintenance issues.

 

How would I, as an IT professional (not as a management consultant), go about rationalizing a system of output reports (and views) for maximum business impact? The exercise may be applied to a system of reports that already exists, or to reports being planned for a new application. While some of the steps are obvious, some are not. The obvious ones (especially in an IT-mature organization) are included here for completeness:

 

- Compile a list of the reports to be subjected to this exercise of rationalization.

 

- Develop the business purpose of each report. Weed out duplicate ways of stating the same purpose. Qualifiers are useful in generating variants of a business purpose: shows payment-pending invoices by Region, by Office, by Customer, by Product-line, by Period, etc.

 

- One may or may not have the option of meaningfully (re)naming the reports to point to their purpose.

 

- Do a preliminary check of whether the information content of the report supports its business purpose. The depth of this check depends on the IT professional’s knowledge of the business and of best practices in the domain.

 

- Generate a reference matrix showing reports and their users, with the users grouped under their functional silos: Finance/Accounts-Receivable, HR/Payroll, etc. (A minimal sketch of such a matrix appears after this list.)

 

- Classify the users of each report. A user may be a ‘direct’ or ‘responsible’ user, using the report to manage his operations; or a ‘supervisory’ or ‘accountable’ user, using the report to review the operations with his team. An ‘informational’ user is merely kept informed by the report. This simple classification is adequate for most purposes.

 

- Revisit each report with its direct and supervisory users. Validate the business purpose, the information content and the format (the format aspect of a report, though quite important, is not pursued further in this blog). There are some interesting and powerful opportunities at this step to restore true value: a) Check whether the report is used as-is, or whether the user does further processing to make it serve its intended purpose. Very often, it is found, the user does some additional massaging of the numbers in the report: adding a missing summary, computing a ratio or a KPI, comparing with a past period, etc. A significant efficiency contribution would be to cut out this massaging. b) More complex massaging is usually carried out in Excel; can this be done away with, or at least seamlessly integrated? c) This is an opportunity to ‘hard’ reconcile the supervisory perspective of a business aspect with the direct operational perspective. A no-brainer simplification is to ensure the Transaction Report goes to operating personnel and the related Summary Report goes to supervisory personnel. d) Review the list of ‘informational’ users of this report and the reasons for their inclusion or exclusion; mark candidates for inclusion/exclusion.

 

- These done, take the discussion to the broader plane of the user’s responsibilities and how the reports support those responsibilities. This would reveal the ‘missing’ views and reports – potential for creating value. It is not unusual to find system outputs that do not cover the entire breadth of a user’s responsibilities or his KPIs.

 

- Review with each informational user the list of reports he receives and his thoughts on inclusions and exclusions. Go back to the direct and supervisory users of the reports to finalize the ‘informational’ inclusions and exclusions. At this point a report may even undergo some changes to suit the needs of the informational users, or some missing reports may again be uncovered.

 

- Note that a report with multiple ‘responsible’ users, especially from different functional silos, strongly indicates multiple business purposes, stated or omitted. And a report with multiple purposes is a strong candidate for splitting.

 

- Multiple reports for the same or related purposes are good candidates for merging. When the business purpose is quite specific (not generic like ‘highlights cost overruns’), their distribution lists could still be different if they present different perspectives. Do they?

 

- Develop an exhaustive list of abnormal events that could occur in each functional silo and across silos. Relate each event to the Exception Report that shows it up. This may reveal events with potentially serious consequences passing through unflagged. It is also important to check a) whether these events are flagged at the earliest possible instant after their occurrence; b) whether the reporting intensity is commensurate with the degree of abnormality; and c) whether the recipients of the reports include all users concerned with the origin of the events and with the organization’s consequent responses. Without sufficient care here, process breaks could severely impair the organization’s ability to respond.

 

- A report-type view of the system of reports also throws up useful, if gross, pointers to imbalances. The absence of Log Reports may indicate holes in statutory compliance, weaknesses in security-audit procedures and, in some cases, even in recovery capabilities. Few Exception Reports may point, as we have already seen, to a failure to flag significant abnormal events in the operations and to respond quickly. Are the operating and supervisory personnel adequately (not overly) serviced, with Transaction, Processing and Summary Reports covering their major responsibilities and accountabilities? Similarly, are business purposes adequately and powerfully supported? Are functional silos bridged optimally?
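
Here is the promised minimal sketch of the report-user reference matrix and the role classification (all report and user names are hypothetical); it also mechanizes the splitting heuristic: a report with ‘direct’ users in more than one silo gets flagged.

```python
# A minimal sketch, with hypothetical names, of the report-user reference
# matrix and the direct/supervisory/informational classification described
# in the steps above.
from enum import Enum

class Role(Enum):
    DIRECT = "direct/responsible"
    SUPERVISORY = "supervisory/accountable"
    INFORMATIONAL = "informational"

# matrix[report][user] = Role; users carry their functional silo as prefix.
matrix = {
    "Payment-pending invoices by Region": {
        "Finance/AR clerk": Role.DIRECT,
        "Finance/AR manager": Role.SUPERVISORY,
        "Sales/Region head": Role.INFORMATIONAL,
    },
    "Payment-pending invoices by Customer": {
        "Finance/AR clerk": Role.DIRECT,
        "Sales/Account manager": Role.DIRECT,
    },
}

# A report with 'direct' users from different silos hints at multiple
# business purposes -- a candidate for splitting.
for report, users in matrix.items():
    direct_silos = {user.split("/")[0] for user, role in users.items()
                    if role is Role.DIRECT}
    if len(direct_silos) > 1:
        print(f"Review for splitting: {report} (silos: {sorted(direct_silos)})")
```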

 

It would be interesting to see whether some principles of project portfolio management could be carried into this exercise of rationalizing system outputs.

 

Just as we have rigor in database design (ERD, normalization…), this appears to be a ripe candidate for a formal model and practice, both for design and, importantly, for ongoing review.

 

In summary, rationalizing the system outputs has a ready pay-back in managerial effectiveness: a) re-engineering the outputs for maximum business impact and operational efficiency; b) weeding out redundancies in the outputs as well as in their distribution; c) discovering opportunities for filling gaps and creating value for the business; and d) making up for debilitating process breaks.

 

Importantly, note that IT application boundaries, technology platforms and deployment architectures pose no problems in carrying out this exercise. And since change is a constant in business, this cathartic effort is not likely to be a single shot.

 

A potential service offering of real value from the CIO’s stable? It has a quick turn-around and, for the most part, may not need face-to-face engagement or travel.

 

(concluded)

 

Read Full Post »

I recall how a leading multinational hi-tech company applied triaging in planning the releases of a key international business application. The basic question was: which enhancements from the backlog should be taken up for the next release? If the answer to this simple, basic question were not arrived at in a fair, logical and consensual manner, it could lead to a lot of heart-burn in different pockets of the company, not to speak of misalignment with the company’s established priorities.

The governance mechanism established the over-all principles (or it could be the CEO setting the credo for the company) defining which attributes mattered, and by how much. For instance: ‘Customer Satisfaction (back then, ‘Customer Experience’ was not yet in currency) is the number-one priority for us this year, followed by Internal Operational Efficiencies.’ Statements such as these act as guideposts for actions and decisions in a variety of situations; in this context, they helped in putting together a base set of attributes and in defining priority weights for these attributes, consensually, within the company. Obviously the weights were subject to revision with passing time and changing priorities. For the present, we will ignore the nag who heckles us with: ‘Can I trade 2 units of Efficiencies for 1 unit of Satisfaction?’

Representative users of the application (it was a global application for the company, remember?) assembled and heard a short presentation on the what and why of each enhancement tabled for the release. Immediately after the presentation and clarifications, they voted on how much the enhancement contributed to each attribute, on a simple scale of 1 to 5. Each attribute score was averaged over all voters, and a composite score was computed for the enhancement by combining the attribute scores with their respective pre-defined weights. This was repeated for all the enhancements proposed for the release, which were then sorted high-to-low on their composite scores. Effort estimates for each enhancement were available a priori, as was the total effort capacity available for the release. Now the enhancements were taken up in the order of their scores and assigned to the release, up to the point of using up the available effort capacity. There were, of course, some enhancements which, for good reason, ‘jumped the queue’. The process also pointed to any augmentation of, or cut-back to, the available effort capacity needed to pack more or less into a release.
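
A minimal sketch of the scoring and packing just described, with made-up weights, votes and effort figures: average each attribute’s votes, combine them with the governance-defined weights, sort, then fill the release greedily up to the effort capacity (queue-jumpers aside).

```python
# A minimal sketch, with made-up numbers, of the release-triage scoring
# described above: average each attribute's votes, combine with attribute
# weights, sort high-to-low, then fill the release up to effort capacity.

# Governance-defined attribute weights (hypothetical values).
weights = {"customer_satisfaction": 0.6, "operational_efficiency": 0.4}

# Per-enhancement votes on a 1-5 scale, plus a priori effort estimates.
enhancements = {
    "ENH-101": {"votes": {"customer_satisfaction": [5, 4, 5],
                          "operational_efficiency": [2, 3, 2]}, "effort": 30},
    "ENH-102": {"votes": {"customer_satisfaction": [3, 3, 2],
                          "operational_efficiency": [5, 4, 5]}, "effort": 50},
    "ENH-103": {"votes": {"customer_satisfaction": [4, 4, 4],
                          "operational_efficiency": [4, 4, 3]}, "effort": 40},
}

def composite_score(votes):
    # Average each attribute over all voters, then weight and sum.
    return sum(weights[attr] * (sum(v) / len(v)) for attr, v in votes.items())

ranked = sorted(enhancements.items(),
                key=lambda kv: composite_score(kv[1]["votes"]), reverse=True)

capacity = 80  # available effort for the release (person-days, say)
release, used = [], 0
for name, info in ranked:
    if used + info["effort"] <= capacity:  # greedy fill, queue-jumpers aside
        release.append(name)
        used += info["effort"]

print(release, used)  # ['ENH-103', 'ENH-101'] 70
```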

Most companies follow similar processes in planning their application releases amidst conflicting push-pulls.

(Note: The above is a fairly generic description of the triaging process, without any client-specific information.)

Read Full Post »