
Posts Tagged ‘SDLC’

For a reason, I’ve reblogged here a post from my other light-reading blog at http://ksriranga.wordpress.com/2014/01/31/how-to-hit-the-bulls-eye-every-time/ for your perusal.

Will catch up with you on the other side.

Begin Post:

How To Hit The Bull’s Eye Every Time?

The myth of 10,000 hours needed to become an ace at anything is busted. Read on to find out how.


One morning a Duke was riding through the woods with his men-at-arms and servants when he caught sight of a target of concentric circles painted on a tree trunk, with a bolt smack in the center.

Some distance away, he came upon another tree that also bore a target, with a bolt in the bull’s eye.

He found more trees of the same kind.

“Who is this amazingly skilled bowman?” wondered the Duke. “Fetch him wherever he is. I have to meet him!”

When his retinue looked around, they found a boyish-looking young man with a bow and bolts. They brought him before the Duke.

“Lad, fear not. Who is the great bowman who has hit the bull’s eye every time? Do you know him?”

The young man nodded to say he did know who did it.

“Is it your father?”

He shook his head, this time to say it wasn’t him.

“The teacher under whom you’re perhaps training?”

It wasn’t him either.

The Duke persisted with his query.

Finally the young man mumbled that it was he who had shot the bolts plumb into the middle of every last one of the targets.

The Duke laughed aloud: “Surely you didn’t simply stroll up to the targets and hammer the shafts into the middle, did you?”

“No, my master. I shot them from 100 paces. I swear it by all that I hold holy.”

“That is truly astounding; you’re the best archer I’ve ever seen. I hereby appoint you as a trainer to my archers.”

The young man thanked the Duke profusely.

“I still have a question to ask of you. How did you get to be so good at hitting the bull’s eye every time? Did you spend all day practicing? If you’re so good, your teacher must be a wizard. Take us to him, will you?”


“No, no. It is like this: I stand upright, take careful aim, hold my breath, sight with one eye and shoot the arrow at the tree.”

“Well, that’s what we all do too.”

“And then I draw the target circles around where the bolt went into the tree.”

End Post.

Well, jest aside, it is the same with some projects: the result is whatever happens.

The result is not what was committed to the management or to the customer. And usually there are enough good reasons to explain why: extraneous environmental factors like weather, political sensitivities like a civic disturbance, or a customer, deliberately or otherwise, taking advantage of a poorly worded contract to expand the scope or demand services beyond what was originally envisaged. Etc., etc.

The project manager is expected to foresee at least some of these risks and plan mitigation strategies. If that is not done, or if the mitigation strategies are not effective, it is strongly advisable for the project manager to stop the continuing week-after-week agony, step back, rethink and re-plan with the customer’s help.

What one finds, instead, is insufficient thinking and action to contain, to whatever extent possible, the impact of such uncontrollable developments. These are portrayed as given, and the project lives from week to week.

If the factors are absolutely immitigable, then they must be factored into the contract and the original plan. Example: doing a field survey in the monsoon or in severe winter. Though uncommon, I’ve seen projects executed in a start-stop mode as these factors permit. Another interesting example is where a more fundamental change in approach was needed to solve insurmountable problems: given the difficulty of freezing requirements with users in advance despite best efforts, the very methodology of developing software has undergone a transformation, now using agile methods for success.

Barring those outliers (usually of the R&D kind), can project outcomes be as certain as taxes and death? We may not be there yet, but it certainly goes a long way if project management does not let reasons override results.

End

Read Full Post »

 

The business tsunami must have already hit the shores of IT service providers by now. In peace times it is always useful to understand how well our customer is doing in his marketplace, his points of success and vulnerability, and to add high-impact value to our services. However, in war times such as this, the universal common-sense trend is to fold inwards on costs and productivity. Initiatives to help our customer along this direction would go a long way towards forging a stronger relationship. It would not be a smart idea to do nothing, maintaining the status quo, or to get our strategies mixed up.

 

If we have been servicing our customer for some time now, it shouldn’t be too difficult to see where these opportunities lie. It’s time to make those moves, if it is not already too late.

 

Cutting down on waste is an effective strategy to reduce costs and boost efficiency. There is an excellent article by Bill Kastle in iSixSigma magazine (1) on how to cut waste in financial services. This post distills some experiences from the IT services industry. Here we go:

 

Waste 1: Over-Communication

 

In a production model where software development involves the long chain of end-user, user-specifier, analyst, designer, coder, tester and user-acceptor, there is communication at every step. A number of system models are in use to support this communication with clarity and efficiency, depending on what is communicated.

 

IT services are often rendered using a mix of onsite and offshore resources. In some cases, when no onsite resources are deployed by the offshore service provider, the offshore team directly interacts with the customer’s project manager. In either case, effective communication between the two geographically separated teams is absolutely essential for the success of service delivery, and holding regular telecons is a well-established practice. A telecon is a lot more efficient and time-saving than exchanging multiple rounds of lengthy emails, and much greater clarity is obtained on each other’s views.

 

On the negative side, these telecons go on for long hours at schedules inconvenient for the customer (due to time differences), soak up a good amount of the offshore team’s most productive morning time, and are often not sharply focused. Some more time is consumed in assembling before the telecon and returning to desks after it.

 

Suppose there is a diktat that, from tomorrow, a telecon that used to last an hour will be cut to 45 minutes by an automatic timer. How would I, as the host initiating the telecon, manage? Some simple steps, really:

 

– I would plan the structure of the telecon by listing the items in order of priority, naming the participants and collecting the background material.

– Communicate the plan and the relevant material to all the participants.

– With this prior preparation, tightly orchestrate the call, keeping to the subject at hand and limiting participation at each step to only those who need to be there.

 

If a project of 5 team members holds 3 hour-long telecons a week with 3 participants per telecon, cutting each by 15 minutes yields 1%+ savings in productive hours, not to speak of the customer going to bed 15 minutes earlier and reduced telephone bills!
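For the curious, the arithmetic checks out on a quick back-of-the-envelope basis, assuming a standard 40-hour work week (an assumption; the post does not state one):

```python
# Back-of-the-envelope check of the claimed 1%+ savings.
team_size = 5
weekly_hours_per_person = 40          # assumed standard work week
telecons_per_week = 3
participants_per_telecon = 3
minutes_saved_per_telecon = 15

total_productive_hours = team_size * weekly_hours_per_person   # 200 hours
hours_saved = (telecons_per_week * participants_per_telecon
               * minutes_saved_per_telecon / 60)               # 2.25 hours

print(f"Savings: {hours_saved / total_productive_hours:.2%}")  # Savings: 1.12%
```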

 

There are times one has to be innovative beyond this basic efficiency in communication. On one occasion, a customer brought up his tale of woe: he had to hold daily telecons with the offshore team that kept him awake late into the night and sapped his energy. He was building an application, with the help of an offshore team, for managing the documents and associated workflow of legal cases in a law firm. There was enormous variety in those cases, and also in the ensuing workflow. It took him a lot of time to describe these requirements to the analyst on this side, who was trying hard to make sense of the talk (nevertheless a vast improvement over the earlier arrangement, where he had to make the techies understand him). The customer candidly admitted that he usually missed a few points in the welter of details discussed, which, with great fidelity, were missed in the software delivered as well. And QA later marked them as defects in the software.

 

Does the above tale of woe point to ways of reducing the burden of communication, and incidentally reducing defects too?

 

Yes, there are at least a couple that could improve matters.

 

Firstly, even in a telecon, a more structured way of articulating the requirements could clearly be deployed. For instance, if screens are mainly used to specify user activities, the requirements may be tightly sequenced as: a) for each field, its description (syntax and semantics), default value, display control and field-level validation; b) screen-level (or inter-field) validation and computation; and c) for each event not covered under a) and b), its description and processing. Templating the requirements this way cuts out free-style (and wasteful) communication and also safeguards against omissions and lack of clarity very early in the SDLC. This holds regardless of development paradigm: waterfall, agile, etc.
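As a rough illustration (not a prescribed format), such a template could be captured as a simple structure following the a)-b)-c) sequence above; all names and rules below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FieldSpec:                          # a) per-field requirements
    name: str
    description: str                      # syntax and semantics
    default: Optional[str] = None
    display_control: str = "textbox"
    validation: Optional[str] = None      # field-level validation rule

@dataclass
class EventSpec:                          # c) events not covered by a) and b)
    trigger: str
    description: str
    processing: str

@dataclass
class ScreenSpec:
    title: str
    fields: list = field(default_factory=list)
    inter_field_rules: list = field(default_factory=list)  # b) screen-level validation/computation
    events: list = field(default_factory=list)

# One templated requirement, ready to be reviewed for omissions:
login = ScreenSpec(
    title="Login",
    fields=[FieldSpec("user_id", "Registered e-mail address",
                      validation="must be a well-formed e-mail")],
)
```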

 

This is in a way no different from the traditional method of systematically capturing user needs in a comprehensive requirements document, instead of catching them on the run, as is the wont now.

 

As an aside, this is also a potential source of quality problems: the testing team, not being part of these telecons, is never first-hand aware of the requirements or changes agreed with the customer during them. This disconnect shows up as inadequate test coverage or, worse, wrong test cases. A mitigating practice is to at least crisply capture the highlights of the telecon in a follow-up shared communication.

 

Secondly, familiarity with, if not knowledge of, the application domain significantly reduces the communication burden in understanding requirements. While the luxury of a domain expert may not be available on most occasions, a short capsule training program in the problem domain for the analyst greatly speeds comprehension of the requirements. This compromise approach is vastly under-rated or untried. In practice, it has always significantly speeded up capturing the requirements more completely and clearly.

 

Another aid to easier understanding of requirements: in many projects, short or long, developing a glossary of entities, rules, validations, exceptions, etc., capturing the semantics of the application and supported by cross-references, is very useful both for easy comprehension and for conserving application knowledge gained over time, obviating the need for verbose documentation. Going beyond the requirements-gathering phase, a good glossary is an effective communication tool in all phases of a project. And the nice thing about it is that it can evolve incrementally from scratch as more facts are uncovered. In practice, this has not received the attention it should.
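As a minimal sketch of how such a glossary could start from scratch and evolve, the structure below loosely follows the law-firm example above; the entries are illustrative only:

```python
# An incrementally evolving glossary of entities, rules, etc. with cross-references.
glossary: dict = {}

def define(term, kind, definition, see_also=()):
    glossary[term] = {"kind": kind, "definition": definition,
                      "see_also": list(see_also)}

define("Case", "entity", "A legal matter handled by the firm.", ["Document"])
define("Document", "entity", "A filing or exhibit attached to a Case.", ["Case"])
define("Filing deadline", "rule",
       "A Document must be filed by the court-set date.", ["Document"])

for term, entry in glossary.items():
    print(f"{term} ({entry['kind']}): {entry['definition']}"
          f"  See also: {', '.join(entry['see_also'])}")
```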

 

Sometimes the telecon consciously moves away from tasking, problem-solving or reviewing mode into a rambling mode, merely to establish or strengthen rapport between the teams on both sides. Or junior members are called in to watch and learn how to interact with customers. These are planned departures from the course of efficiency espoused herein.

 

We have looked at only one kind of communication, the telecon, and how to bring efficiency and quality to it. The structure of a telecon depends on its purpose and evolves over a few sessions. Where the telecon is to explain user requirements, we also saw some approaches to prepare ourselves better, reducing the burden of communication and enhancing quality too. The processes of communication in IT services are pervasive and substantial in a project, with enough opportunities to cut out waste at every step and to boost quality.

 

Can we shorten the long service chain from end-users to user-specifier to the many in the IT service provider organization and back to user-acceptor? Can we also pare the written documentation down to a minimum? These steps should in principle reduce the overall load of communication. That’s a subject for another day, with implications well beyond communication load.

 

Enriching views and experiences from you, in concurrence or to the contrary, are most welcome.

 

To continue on other themes for reducing waste…

 

Ref: (1) http://finance.isixsigma.com/library/content/c040324a.asp

Read Full Post »

 

Recently I learnt of an alarming situation regarding a system rolled out in a utility service for rendering emergency help to the public. It is quite imaginable that similar conditions prevail in a number of other towns and in various utility operations.

 

It appears that these systems are not developed centrally under controlled processes, as one would have thought, but more as local initiatives. And they are developed with the help of freelancers or part-time professionals, not employees on the rolls. This is understandable given the skills shortage and the costs involved. And dealing with an individual developer is a lot easier than dealing with a service provider company, especially for carrying out changes. In this mode of development the changes, more often than not, are occasioned by requirements originally missed due to lax ways of specifying, rather than by requirements evolving over usage. So it is not unusual to see a spate of changes lined up soon after the roll-out.

 

The code rolled out on the production server does not match the code on the development server. Perhaps some code pieces are lying somewhere on the development server, not easily traceable; or they are lost; or some changes were made directly on the production server. The seriousness of the situation has not sunk in yet. The ability to maintain and enhance the software is seriously compromised. For those of us who have been in this business long enough, this is not unusual. What is scary is that these are public-facing applications deployed in the field, whose malfunction could cause personal injury or, worse, loss of life.

 

Further, there is no concept of version control, or of a sandbox for thorough testing before roll-out. It is not known if the system was piloted before the full roll-out to iron out wrinkles.

 

As if this were not enough, I understand the freelancer has gone away, for reasons not known. Maybe it was a cost-cutting measure, or the freelancer hiked his fees, or he had to put distance between himself and the mess that was created (not entirely his doing, perhaps). Needless to say, there has been no formal hand-over process. When even the live-code inventory is incomplete, code or system documentation is too much to ask.

 

Into this scenario walks the freelancer’s replacement. Can you imagine his or her plight? And there is a pile of pending enhancement requests waiting to be taken up right away. What it takes to maintain software is not always appreciated. What is at stake is not the individual’s performance, but the safety of the public for whom the system is in operation.

 

Since public safety is involved, it is important that some minimal norms are enforced for developing and subsequently maintaining these systems, and that compliance with these norms is audited from time to time. Maybe such norms and audits are already established practice in some geographies.

 

This is not a call against freelancers, who deliver the great value, flexibility and responsiveness (and skills too) required by users whose need for software development is light and sporadic. It is a call for controlled processes for developing software, especially for moderate-to-high-impact applications.

 

Processes must be recognized as a mandatory part of any scenario in which software is developed or used for a serious application. The processes are not meant only for the developers; importantly, they also discipline the users (not the end-using public) who commission the developers and whose laxity is often the root cause of the ensuing mess. It follows that both the developers and the users must be educated on the processes to be followed.

 

Of course, the processes must be right-sized to suit the customer and the application; else they could enormously slow down the software development effort. Admittedly, right-sizing processes may not be a trivial exercise. Further, it is not necessary to induct a full set of processes, which could be quite intimidating for a light-load user. It is sufficient to implement some key processes at a level of simplicity that serves the purpose.

 

This could be done with some minimal professional help. Maybe there are sources from which it is possible to buy right-sized processes, and to get the freelancers to follow them.

 

On the part of software service providers, there is money on the table for those who come up with innovative models to develop and service systems for these light-load customers at an affordable TCO, without compromising on the norms.

 

Of course, an off-the-shelf packaged solution significantly mitigates the risks and must be strongly considered over custom development.

Read Full Post »

 

A question I often pop at software professionals is: how do you evaluate an OO design? (Assume for the present that the functional completeness of the design is not in question.) The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, or how well reuse is employed. And some get into OO metrics.

 

I am rarely countered with the observation that the question is a wide-open one: there are several aspects to a design (some 20-plus non-functional attributes), so which one do I have in mind for evaluating it? After all, a design is a model for realizing both functional and non-functional user requirements.

 

If I were asked to be more specific about my chief concern in regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software these changes are easier to make; with some, it is gut-wrenching. And today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with business needs.

 

So the concern over a software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this all just the usual non-functional attribute of ‘maintainability’? Maybe, in part. I would rather think of it as the legitimate evolution of the software, whereas ‘maintenance’ connotes status quo. And today the pace of this evolution has quickened, even in ‘stable’ businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. The question could also be turned on its head: how does one produce a design that readily accommodates change?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts such as encapsulation, inheritance, the interface mechanism, etc. are suited to coping with change. So, obviously, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing the nouns in the problem description. If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design can cope with the changes to come.

 

The same principle applies to changes. Changes, too, demand a set of abstractions in the areas of change if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements and, importantly, how these requirements will change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware if formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section of the requirements document distinctly devoted to foreseeable evolutionary changes? Not in nine-plus cases out of ten. No wonder our systems are not well equipped to adapt to the flow of time.

 

The software development community could counter with: “How can we foresee changes to come? If we could, we would provide for them from the word go.” This is strictly not true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring our questions to the user’s table specifically targeting the future. Some changes are obvious in the trade and are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-persons’ incentives or commission: the scheme for incentivising sales-persons changes from time to time, even mid-year, depending on business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted.
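As a sketch of what abstracting the scheme might look like (scheme names and rates are purely illustrative), the incentive rule can sit behind a small interface so that a mid-year change is localized to one class:

```python
from abc import ABC, abstractmethod

class IncentiveScheme(ABC):
    """The design abstraction aligned with the area of change."""
    @abstractmethod
    def commission(self, sale_amount: float, is_new_client: bool) -> float: ...

class FlatScheme(IncentiveScheme):
    def commission(self, sale_amount, is_new_client):
        return 0.05 * sale_amount

class NewClientFocusScheme(IncentiveScheme):
    """Healthy quarter: reward new-client acquisition more generously."""
    def commission(self, sale_amount, is_new_client):
        return (0.08 if is_new_client else 0.04) * sale_amount

def settle(scheme: IncentiveScheme, sale_amount: float, is_new_client: bool) -> float:
    # Callers depend only on the abstraction; swapping schemes mid-year
    # leaves the rest of the system untouched.
    return scheme.commission(sale_amount, is_new_client)

print(settle(NewClientFocusScheme(), 10_000, is_new_client=True))  # 800.0
```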

 

However, plans to open a new office, start a new distribution channel, introduce a new pricing policy or new service offerings, acquire a company…may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be privy to some of these plans!

 

In summary, a formal and focused business-volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in the future (to the foreseeable extent) are identified and the design abstractions appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.

 

In fact it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when there are changes to current practice, reflecting them could turn out to be a minor or a significant effort depending on the degrees of freedom in the initial layout. For instance, consider organizations that frequently reorganize their operations: divisions and departments merge and split, get centralized and decentralized…The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario is in some kind of flux. These may get fleshed out later. Design abstractions must be able to cope with these too.

 

All along, software architects and designers were required to think in abstractions. Are we now asking our Business Analysts to get into the groove too? Yes, that’s the drift.

 

How do we build systems for businesses which are intrinsically very volatile? Will look at it in a post to follow.

Read Full Post »

‘Projects, like diamonds, are forever’
‘It doesn’t turn out to be what you had wanted’
‘Systems are so inflexible, even a minor change is two months away’
‘IT guys don’t stick around’

With these kinds of perceptions and assessments from end-users, it is clearly thumbs down for custom software development projects. Yet software continues to be custom developed, at least where a) stable, good-fit and cost-effective ready solutions have not yet emerged, b) the applications are in special niches, or c) currently available solutions do not leverage newer technologies that bring significant benefits. All of this still adds up to substantial action in custom software development. Hence the pressure on IT professionals to constantly hone SDLC skills and practices, to do better than ever before in the face of all these negative assessments and perceptions.

In this piece and others to follow, we will take up some simple, incremental ways of perceptibly adding significant value without appreciably altering the prevailing basics.

Any application provides its user with capabilities relevant to the business. When a user exercises a capability, he derives definite value out of a transaction (a transaction is actually an exchange of values). The application delivers the capability by initiating a corresponding main service process, which in turn invokes or works in tandem with other services to get the job done. Thus an e-store gives a shopper the capability to shop on the net. The direct tangible value he derives is the goods ordered and delivered at home. The balancing value the store derives out of the transaction is the payment the user makes for his purchase.
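To make the vocabulary concrete, here is a toy sketch of that e-store capability, with a main service process working in tandem with supporting services; all function names are hypothetical:

```python
# Toy sketch: the shopper's 'shop on the net' capability realized by a
# main service process and supporting services (all hypothetical).
def check_stock(items):                 # supporting service
    return True

def take_payment(amount):               # value flowing to the store
    return "payment-ref-001"

def arrange_delivery(items, address):   # value flowing to the shopper
    return "shipment-ref-001"

def place_order(items, amount, address):
    """Main service process behind the shopping capability."""
    if not check_stock(items):
        raise RuntimeError("out of stock")
    payment_ref = take_payment(amount)
    shipment_ref = arrange_delivery(items, address)
    return payment_ref, shipment_ref

print(place_order(["book"], 15.0, "42 Main St."))
```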

Here are some easy opportunities for enhancing value:

Let us look a little more closely at the process of detailing the requirement for providing a capability to the user. The requirement could be captured formally as a use-case, or by way of descriptive text. The discussion that follows assumes use-cases, without any loss of generality.

While the primary user or beneficiary of a use-case is quite evident, it is not so with secondary users. With some dogged drilling down, a number of secondary users may be uncovered who could derive some value out of this capability. The secondary users may not even have a direct role in the use-case; they may merely be recipients of alerts. For instance, when a new employee joins, an alert could go out to various other functions and systems to induct him. The infrastructure function may be alerted to provision him with computing resources. The administration function may be alerted to arrange for security badges, etc. The secondary users exercise their own capabilities to get their jobs done, but the alerts are useful to ensure the ball is not dropped, especially at process breaks, ubiquitous in organizations. And all of this is realized with only a little more digging.

Practice: Examine a use-case carefully to uncover more secondary users who could possibly derive value. It could even be in the form of alerts.
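As a toy illustration of such alert fan-out, a minimal in-process publish/subscribe sketch follows; the event name and recipients are hypothetical:

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event, handler):
    subscribers[event].append(handler)

def publish(event, payload):
    for handler in subscribers[event]:
        handler(payload)

# Secondary users have no direct role in the use-case; they merely receive alerts.
subscribe("employee_joined",
          lambda p: print(f"Infrastructure: provision computing for {p['name']}"))
subscribe("employee_joined",
          lambda p: print(f"Administration: arrange security badge for {p['name']}"))

publish("employee_joined", {"name": "New Hire"})
```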

The same principle may also be applied to the application as a whole, seen as providing a collection of capabilities to different users. A quick view of these capabilities, sorted by their main beneficiaries or users, will usually show a skew. This immediately prompts a question: is the skew intended? Example: the application administrator, by design, has capabilities limited to administration of the application. Or it could show up an opportunity to empower some of those Cinderellas with more capabilities in this application. And the effort involved in realizing these additional capabilities may only be incremental. This may even trigger a larger and beneficial debate on the user’s role as envisaged now in the organization, and its possible enrichment!

Practice: Construct a view of capabilities arranged by users. Examine this view to spot opportunities for strengthening user empowerment (righting the skew).
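As a quick sketch of constructing such a view (the capability inventory below is purely illustrative), a few lines suffice:

```python
from collections import defaultdict

capabilities = [
    ("place_order", "shopper"), ("track_order", "shopper"),
    ("cancel_order", "shopper"), ("manage_accounts", "admin"),
    ("view_stock_report", "warehouse_clerk"),
]

by_user = defaultdict(list)
for capability, user in capabilities:
    by_user[user].append(capability)

for user, caps in sorted(by_user.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{user}: {len(caps)} -> {caps}")
# A lone capability against 'warehouse_clerk' may signal an empowerment
# gap worth righting, if the skew is not intended.
```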

More to follow…

Read Full Post »