
Posts Tagged ‘Requirements Gathering’

For a reason, I’ve reblogged here a post from my other light-reading blog at http://ksriranga.wordpress.com/2014/01/31/how-to-hit-the-bulls-eye-every-time/ for your perusal.

Will catch up with you on the other side.

Begin Post:

How To Hit The Bull’s Eye Every Time?

The myth of 10,000 hours needed to become an ace at anything is busted. Read on to find out how.


One morning a Duke was riding through the woods with his men-at-arms and servants when he caught sight of a target of concentric circles painted on a tree trunk, with a bolt smack in the center.

Some distance away, he came upon another tree that also bore a target, with the bolt in the bull’s eye.

He found more trees of the same kind.

“Who is this amazingly skilled bowman?” wondered the Duke. “Fetch him wherever he is. I have to meet him!”

When his retinue looked around, they found a boyish-looking young man with a bow and bolts. They produced him before the Duke.

“Lad, fear not. Who is the great bowman who hit the bull’s eye every time? Do you know him?”

The young man shook his head to say he did not know who did it.

“Is it your father?”

He shook his head again, this time to say it wasn’t his father.

“The teacher under whom you’re perhaps training?”

It wasn’t him either.

The Duke persisted with his query.

Finally the young man mumbled that it was he who had shot the bolts plumb into the middle of every last one of the targets.

The Duke laughed aloud: “I see – you didn’t simply stroll up to the targets and hammer the shafts into the middle, did you?”

“No, my master. I shot them from 100 paces. I swear it by all that I hold holy.”

“That is truly astounding; you’re the best archer I’ve ever seen. I hereby appoint you as a trainer to my archers.”

The young man thanked the Duke profusely.

“I still have a question to ask of you. How did you get to be so good hitting the bull’s eye every time? Did you spend all day practicing? If you’re so good, your teacher must be a wizard. Take us to him, will you?”


“No, no. It is like this. I stand upright, take careful aim, hold my breath, sight it with one eye and shoot the arrow at the tree.”

“Well, that’s what we all do too.”

“And then draw the target circles around where the bolt went into the tree.”

End Post.

Well, jest aside, it is the same with some projects: the result is whatever happens.

The result is not what was committed to the management or to the customer, and there are usually enough good reasons to explain why it happened: extraneous environmental factors like weather, political sensitivities like a civic disturbance, or a customer – deliberately or otherwise – taking advantage of a poorly worded contract to expand the scope or demand services beyond what was originally envisaged. Etc., etc.

The project manager is expected to foresee at least some of these risks and plan out mitigation strategies. If that is not done, or if the mitigation strategies are not effective, it is strongly advisable for the project manager to stop the continuing week-after-week agony, step back, rethink and re-plan with the customer’s help.

What one finds, instead, is insufficient thinking and action to contain, to whatever extent possible, the impact of such uncontrollable developments. These are portrayed as a given, and the project lives from week to week.

If the factors are absolutely immitigable, then they must be factored into the contract and the original plan. Example: doing a field survey in the monsoon or in severe winter. Though uncommon, I’ve seen projects executed in a start–stop mode as these factors permitted. Another interesting example is where a more fundamental change in approach was needed to solve an insurmountable problem: given the difficulty of freezing requirements with users in advance despite best efforts, the very methodology of developing software has been transformed, now using agile methods for success.

Barring those outliers (usually of the R & D kind), can project outcomes also be as certain as death and taxes? We may not be there yet, but it goes a long way if project management does not let reasons override results.

End


Scene II:

Anon Presentation

The main accomplishment in Scene I was to wean the End User (EU) away from ‘reports and formats’, get him to talk about the performance-defining parameters, and have the application compute them for him.

So when they assembled again a few days later, the Business Analyst (BA) and the End User (EU) wore Cheshire-cat grins.

The report designed this time had all the right columns and filters for selection. Additionally Fuel Efficiency was computed and reported at the bottom.

The Consultant (C) checked if they agreed on how Fuel Efficiency was computed. While the definition was simple – the ratio of kilometers run to fuel consumed – in reality the method of computing it was a little tricky and at best approximate. It was important to ensure this was understood clearly and unambiguously. The kilometers run had to be marked from one tank-fill to another and the efficiency computed over so many tank-fills. The period of computation was not delimited by a day, a week or any other time period. Over many tank-fills, it would have made little difference whether the computation was delimited by tank-fills or by time period, but not when there were only a few tank-fills in a week. Also, it was not always a full tank-fill: sometimes they went in for a fill on sighting a filling station even though the tank was not empty yet. This meant the amount of fuel filled had to be captured as well, and could not be assumed to be the capacity of the tank.
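For concreteness, here is a minimal Python sketch of the tank-fill-to-tank-fill computation described above; the record layout and the numbers are purely illustrative assumptions, not taken from the actual application.

```python
from dataclasses import dataclass

@dataclass
class TankFill:
    odometer_km: float    # odometer reading at the time of this fill
    litres_filled: float  # quantity actually filled, not assumed to be tank capacity

def fuel_efficiency(fills):
    """Kilometres per litre measured from the first fill to the last fill.

    The distance is the odometer difference between the first and last fills;
    the fuel consumed over that distance is everything filled after the first
    fill (the first fill only establishes the starting point).
    """
    if len(fills) < 2:
        raise ValueError("need at least two tank-fills")
    km_run = fills[-1].odometer_km - fills[0].odometer_km
    fuel_used = sum(f.litres_filled for f in fills[1:])
    return km_run / fuel_used

# Illustrative numbers only
fills = [TankFill(10_000, 60), TankFill(10_450, 52), TankFill(10_900, 55)]
print(round(fuel_efficiency(fills), 2))  # 8.41 km/l over two tank-fill intervals
```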

To their credit, this was clearly set out by the EU and well understood by the BA. No issues there.

‘Now, what do you do with this magic number on Fuel Efficiency?’ C asked the EU.

‘Well, now I know if I have a problem or not.’

‘Know’ was the proverbial red rag to a bull for C.

‘How do you know? Let me put it differently – how do you defend this number to your boss?’

‘I look at this number and at the type of roads covered. And I know if it’s right or not.’

‘How does it work?’

‘It all depends – if the kilometers were run on highways, I expect a higher efficiency than if it were within a city. Similarly, if the vehicle is on a productive run, it is usually at a lower speed and hence at lower efficiency than in transit.’

‘So you look at the number and look at the composition of the run kilometers and take a call?’

‘Yes, that’s right.’

‘Everybody – your boss and the supervisors in the field – they buy your call?’

‘Well…’

‘How about getting the system to apply the ‘judgment’ you presently make?’

‘If it can be done…’

‘All you need to do is to capture the daily break-up of kilometers run under those four heads: Intracity (Production and Transit) and Intercity (Production and Transit).’

‘That’s possible, though it may not be accurate. We can get the vehicle crew to log the daily kilometers in that manner. That’s not too much additional effort for them.’

‘Now, let us get the break-up in and compute the Fuel Efficiency for each of those four categories separately. You’ll then see clearly the performance and the problem if there’s one.’

End of Scene II

Clearly this was more helpful in getting nearer to the problem area. The trick was to ask the question ‘What would you do with the output?’ repeatedly and get as close as possible to the real performance or problem – and not to stop half-way, leaving the EU to cover the rest in his head.

In many instances the EU is shortchanged in a manner he is not even aware of.  He is required to further process the data given to him. Essentially the output is not directly usable.

It would be interesting to do this simple check on any system – how many of the outputs are directly usable, immediately supporting decisions made? It may reveal pockets of IT inefficiencies, besides throwing up redundancies and inconsistencies in the output.

For clarity, a minor detail was left out of the scene above: the EU pointed out that while a break-up of the daily kilometers run is a simple matter, the fuel consumed in a day could not be broken up under those heads – and hence Fuel Efficiency could not be computed for the different categories. For a moment, C’s push for greater proximity to the performance appeared stymied. He suggested a workaround: start with reasonable targets for Fuel Efficiency under each of the four heads; for the actual kilometers run over several tank-fills, compute the weighted target Fuel Efficiency by applying the targets to those kilometers. The weighted target Fuel Efficiency is then available for comparison with the actual Fuel Efficiency realized.
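One plausible reading of that weighting, sketched below in Python, is to ask how much fuel the vehicle should have used had it met the target in each category, and derive the target km-per-litre from that so it is directly comparable with the actual figure. The category targets and kilometre figures are invented for illustration.

```python
# Illustrative targets (km per litre) for the four heads; real values come from the EU.
targets = {
    "intracity_production": 4.0,
    "intracity_transit":    6.0,
    "intercity_production": 7.0,
    "intercity_transit":    9.0,
}

# Actual kilometres logged by the vehicle crew under each head over several tank-fills.
km_run = {
    "intracity_production": 300,
    "intracity_transit":    150,
    "intercity_production": 500,
    "intercity_transit":    850,
}

total_km = sum(km_run.values())
# Fuel the vehicle *should* have used had it met the target in every category.
expected_fuel = sum(km / targets[head] for head, km in km_run.items())
weighted_target_efficiency = total_km / expected_fuel

actual_fuel = 270  # litres consumed over the same tank-fills (illustrative)
actual_efficiency = total_km / actual_fuel

print(f"target {weighted_target_efficiency:.2f} km/l vs actual {actual_efficiency:.2f} km/l")
```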

End



Credit: openclipart.com (Anonymous)


Scene I:

Anon Reunion

The Consultant (C) was charged with the job of providing an extra level of oversight to the projects under execution. He had called a meeting of the End User (EU) and the Techie doubling up as a Business Analyst (BA) to inquire about the status of the project.

The company operated a fleet of vehicles that traversed the length and breadth of the country. The project was to develop a software application: ‘Daily Fleet Movement (DFM)’. This was conceived as the first of the several modules they needed to operate and manage the fleet.

The BA reported on the status: the EU and he had agreed on a set of reports – the primary output of the system (screen-based or in print) – covering the vehicle and the driver, with facilities for filtering on dates, towns, etc. He further stressed, in C’s presence, the finality of the report content and formats arrived at after lengthy iterations. This, he believed, was necessary especially in view of an earlier experience where the project had dragged on inordinately, with changes to the output coming from the EU right up to the final stages. The solemnity the BA was imposing on the occasion made the EU nervous about what he was signing off on. So he had questions and concerns about what he would get to see from the application, and whether the same debilitating ‘holes’ and painful iterations of the earlier experience would recur this time too.

While the formats and the flexibility of retrieval were being discussed, C jumped in with a question for the EU:

‘Well, you certainly need these reports and you’ll get them.  But I’ve a concern.’

Both EU and BA stopped in their tracks and looked at C.

‘I’m sure you’re tracking and managing the operations on the basis of a few parameters?’

‘Most certainly so, how else would one go about?’ The EU didn’t say it, his body spoke.

‘How come these don’t get mentioned in your discussion?’

‘Not right. You heard us talk about the ‘Vehicle Usage Report’, the ‘Fuel Efficiency Report’…’

‘Do you realize you’re asking for Vehicle Usage Report and our friend here is giving you a big daily log of which vehicle plied where? Exactly what you’re asking for. While the name of the report is comforting, what would you do with it?’

‘What’s wrong with it? I’ve always got one compiled. I can find out, for instance, how many kilometers a vehicle covered in a day.’

‘So you’ll find out, I’m sure, somehow, from this log – though I don’t know how. Now, don’t you want the software to compute and report the same for your ready use, instead of you ‘finding out’?’

C turned to the BA: ‘Just as I suspected. More often than not, the output generated by an application stops short of what an EU must have. And the EU fills the gap by some means, sometimes even erroneously, watering down the benefits of automation. He doesn’t know to ask for more. If that’s not short-changing the EU…’

And to the EU: ‘The few parameters that you need for tracking and managing the operations are called Key Performance Indicators (KPIs).’

Again the look of ‘What’s wrong with him?’

‘My submission is: you tell the BA you need these KPIs computed and reported. Let him start from there and figure out how they’re computed and how they could be presented for effective communication. You don’t tell him: “These are the reports I need, here are the formats, now can you get on with it?” And you don’t ‘find out’.’

The BA and the EU agreed to take up one KPI – Fuel Efficiency – and adopt this approach to design the report afresh from first principles.

End of Scene I

Not to be laughed off. Many sessions of requirements gathering proceed along the above lines, especially in smaller and not-so-IT-savvy shops. Two common reasons: a) The EU is very assertive and/or b) The BA lacks the necessary skills to set the right start for the discussion and take it to conclusion. It is a misconception that a techie or a UX designer with his wireframes is adequate to tease out the business requirements.

So what we have, net-net, is the patient telling the doctor: ‘I know what ails me, Doc, give me these pills.’

Scene II gets even more interesting when they meet again to apprise C of the output they had designed to report on Fuel Efficiency. Once the approach was clear, arriving at a design was a pretty straightforward exercise. Right?

Please wait for Scene II to appear, where C continues his review of the design presented to him, making a point or two of far-reaching impact.

End


One of the biggest and least appreciated challenges in building software and systems is drawing the line on which features are in and which are not. Whenever you catch the smell of feature creep, call up this modern parable. Or, even better, at the project kickoff held right at the outset, when expectations, success factors and scope are discussed, it may be a good idea to walk your audience through this story during the coffee break:


Once upon a time, in a kingdom not far from here, a king summoned two of his advisors for a test. He showed them both a shiny metal box with two slots in the top, a control knob, and a lever. “What do you think this is?”

One advisor, an engineer, answered first. “It is a toaster,” he said.

The king asked, “How would you design an embedded computer for it?”

The engineer replied, “Using a four-bit microcontroller, I would write a simple program that reads the darkness knob and quantizes its position to one of 16 shades of darkness, from snow white to coal black. The program would use that darkness level as the index to a 16-element table of initial timer values. Then it would turn on the heating elements and start the timer with the initial value selected from the table. At the end of the time delay, it would turn off the heat and pop up the toast. Come back next week, and I’ll show you a working prototype.”

The second advisor, an IT Analyst, immediately recognized the danger of such short-sighted thinking. He said, “Toasters don’t just turn bread into toast, they are also used to warm frozen waffles. What you see before you is really a breakfast food cooker. As the subjects of your kingdom become more sophisticated, they will demand more capabilities. They will need a breakfast food cooker that can also cook sausage, fry bacon, and make scrambled eggs. A toaster that only makes toast will soon be obsolete. If we don’t look to the future, we will have to completely redesign the toaster in just a few years.”

“With this in mind, we can formulate a more intelligent solution to the problem. First, create a class of breakfast foods. Specialize this class into subclasses: grains, pork, and poultry. The specialization process should be repeated with grains divided into toast, muffins, pancakes, and waffles; pork divided into sausage, links, and bacon; and poultry divided into scrambled eggs, hard-boiled eggs, poached eggs, fried eggs, and various omelet classes.”

“The ham and cheese omelet class is worth special attention because it must inherit characteristics from the pork, dairy, and poultry classes. Thus, we see that the problem cannot be properly solved without multiple inheritance. At run time, the program must create the proper object and send a message to the object that says, ‘Cook yourself.’ The semantics of this message depend, of course, on the kind of object, so they have a different meaning to a piece of toast than to scrambled eggs.”

“Reviewing the process so far, we see that the analysis phase has revealed that the primary requirement is to cook any kind of breakfast food. In the design phase, we have discovered some derived requirements. Specifically, we need an object-oriented language with multiple inheritance. Of course, users don’t want the eggs to get cold while the bacon is frying, so concurrent processing is required, too.”

“We must not forget the user interface. The lever that lowers the food lacks versatility, and the darkness knob is confusing. Users won’t buy the product unless it has a user-friendly, graphical interface. When the breakfast cooker is plugged in, users should see a cowboy boot on the screen. Users click on it, and the message ‘Booting UNIX v.8.3’ appears on the screen. (UNIX 8.3 should be out by the time the product gets to the market.) Users can pull down a menu and click on the foods they want to cook.”

“Having made the wise decision of specifying the software first in the design phase, all that remains is to pick an adequate hardware platform for the implementation phase. An Intel 80386 with 8 MB of memory, a 30 MB hard disk, and a VGA monitor should be sufficient. If you select a multitasking, object oriented language that supports multiple inheritance and has a built-in GUI, writing the program will be a snap. (Imagine the difficulty we would have had if we had foolishly allowed a hardware-first design strategy to lock us into a four-bit microcontroller!).”

The king wisely had the IT Analyst beheaded, and they all lived happily ever after.

End
Credit: Unknown Usenet source (edited), wackywits.com, openclipart.com (seanujones) and public-domain-photos.com.


 

The business tsunami must have already hit the shores of IT service providers by now. In peace times it is always useful to understand how well our customer is doing in his marketplace, his points of success and vulnerability, and to add high-impact value to our services. However, in war times such as this, the universal common-sense trend is to fold inwards on costs and productivity. Initiatives to help our customer along this direction would go a long way towards forging a stronger relationship. It would not be smart to do nothing, maintain the status quo, or get our strategies mixed up.

 

If we have been servicing our customer for some time now, it shouldn’t be too difficult for us to see where these opportunities lie. It’s time to make those moves, if it is not already too late.

 

Cutting down on waste is an effective strategy to reduce costs and boost efficiency. There is an excellent article by Bill Kastle in iSixSigma magazine (1) on how to cut waste in Financial Services. This post distills some experiences in the IT services industry. Here we go:

 

Waste 1: Over-Communication

 

In a production model where the software development involves the long chain of end-users, user-specifier, analyst, designer, coder, tester, user-acceptor, there is communication at every step. A number of system models are in use to support this communication with clarity and efficiency, depending on what is communicated.

 

IT services are often rendered using a mix of onsite and offshore resources. In some cases, when the offshore service provider deploys no onsite resources, the offshore team interacts directly with the customer’s project manager. In either case, effective communication between the two geography-separated teams is absolutely essential for the success of service delivery, and holding regular telecons is a well-established practice. A telecon is a lot more efficient and time-crunching than exchanging multiple rounds of lengthy emails, and much greater clarity is obtained on each other’s views.

 

On the negative side, these telecons go on for long hours at times inconvenient for the customer (due to time differences), soak up a good amount of the offshore team’s most productive morning time, and are often not sharply focused. Some more time is also consumed in assembling before the telecon and returning to the desk after it.

 

Assuming there is a diktat that, from tomorrow, a telecon that used to last an hour would be cut off at 45 minutes by an automatic timer, how would I, as the host initiating the telecon, manage? Some simple steps, really:

 

– I would plan the structure of the telecon by listing the items in order of priority, naming the participants and collecting the background material.

– Communicate the plan and the relevant material to all relevant participants.

– With this prior preparation, tightly orchestrate the call, keeping to the subject on hand and limiting the participation only to those who need to be at each step.

 

If a project of 5 team members holds 3 hour-long telecons in a week with 3 participants per telecon, cutting each telecon by 15 minutes saves 3 × 3 × 0.25 = 2.25 hours a week against roughly 5 × 40 = 200 productive hours – a saving of over 1% – not to speak of the customer going to bed 15 minutes earlier and reduced telephone bills!

 

There are times one has to be innovative beyond achieving this basic efficiency in communication. On one occasion, a customer brought up his tale of woe: he had to have these daily telecons with the offshore team that kept him awake late into the nights and sapped his energies. He was building an application, with the help of an offshore team, for managing documents of legal cases and the associated workflow in a law firm. There was an enormous variety in those cases and also in the following workflow. And it took him a lot of time to describe these requirements to the analyst on this side who was trying hard to make sense out of the talk (nevertheless, a vast improvement over the earlier arrangement where he had to make the techies understand him). The customer candidly admitted that he usually missed out a few points in the welter of details discussed, which with great fidelity were missed out as well in the software delivered. And the QA marked them later as defects in the software.

 

Does the above tale of woe point to some ways of reducing the burden of communication and also incidentally reduce defects?

 

Yes, there are at least a couple of ways to improve matters.

 

Firstly, even in a telecon, a more structured way of articulating the requirements could clearly be deployed. For instance, if screens are mainly used to specify user activities, the requirements may be tightly sequenced as: a) for each field, its description (syntax and semantics), default value, display control and field-level validation; b) screen-level (or inter-field) validation and computation; and c) for each event not covered under a) and b), its description and processing. Templating the requirements thus cuts out free-style (and wasteful) communication and also safeguards against omissions and lack of clarity very early in the SDLC. This holds regardless of the development paradigm: waterfall, agile, etc.
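To make the idea concrete, here is a minimal sketch, in Python, of what such a requirements template might look like as a data structure; the class and field names are illustrative assumptions, not a prescription.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FieldSpec:
    name: str
    description: str              # syntax and semantics
    default: str = ""
    display_control: str = "textbox"
    validation: str = ""          # field-level validation rule

@dataclass
class EventSpec:
    name: str                     # events not covered by field/screen validation
    description: str
    processing: str

@dataclass
class ScreenSpec:
    name: str
    fields: List[FieldSpec] = field(default_factory=list)
    screen_rules: List[str] = field(default_factory=list)   # inter-field validation / computation
    events: List[EventSpec] = field(default_factory=list)

# Illustrative instance, filled in during (or ahead of) the telecon
invoice_entry = ScreenSpec(
    name="Invoice Entry",
    fields=[FieldSpec("invoice_date", "Date of invoice, dd-mm-yyyy",
                      default="today", validation="must not be in the future")],
    screen_rules=["net_amount = gross_amount - discount"],
    events=[EventSpec("on_save", "User saves the invoice", "post to receivables")],
)
```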

 

This is, in a way, no different from the traditional method of systematically capturing the user needs in a comprehensive requirements document, instead of catching them on the run as is the wont now.

 

As an aside, this is also a potential source of quality problems: the testing team, not being part of these telecons, is never first-hand aware of the requirements or changes agreed with the customer during them. This disconnect shows up as inadequate coverage in testing or, worse, wrong test cases. A mitigating practice is to at least capture crisply the highlights of the telecon in a follow-up shared communication.

 

Secondly, familiarity with, if not knowledge of, the application domain significantly reduces the communication burden of understanding requirements. While the luxury of a domain expert may not be available on most occasions, a short capsule training program in the problem domain for the analyst greatly speeds up comprehension of the requirements. This compromise approach is vastly under-rated or untried; in practice, it has always helped capture the requirements faster, more completely and more clearly.

 

Another aid to easier understanding of requirements: in many projects, short or long, a glossary of entities, rules, validations, exceptions, etc. – capturing the semantics of the application and supported by cross-references – serves the same purpose of easy comprehension, and also conserves the application knowledge gained over time, obviating the need for verbose documentation. Going beyond the requirements-gathering phase, a good glossary is an effective communication tool in all phases of a project. And the nice thing about it is that it can evolve incrementally, starting from scratch, as more facts are uncovered. In practice, this has not received the attention it should.
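A glossary of this kind needs no special tooling. Below is a minimal sketch, in Python, of one possible shape, with invented entries drawn from the fleet example earlier on this page.

```python
# Each entry records what a term means, what kind of thing it is
# (entity, rule, validation, exception...) and which other terms it refers to.
glossary = {
    "Tank-Fill": {
        "kind": "entity",
        "definition": "One refuelling event; captures the odometer reading and litres filled.",
        "see_also": ["Fuel Efficiency"],
    },
    "Fuel Efficiency": {
        "kind": "rule",
        "definition": "Kilometres run divided by fuel consumed, measured tank-fill to tank-fill.",
        "see_also": ["Tank-Fill"],
    },
}

def add_term(term, kind, definition, see_also=()):
    """Grow the glossary incrementally as new facts are uncovered."""
    glossary[term] = {"kind": kind, "definition": definition, "see_also": list(see_also)}

add_term("Productive Run", "entity",
         "A trip segment on which the vehicle earns revenue, as opposed to transit.",
         see_also=["Fuel Efficiency"])
```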

 

Sometimes the telecon consciously moves away from tasking or problem solving or reviewing mode into a rambling mode merely to establish or strengthen the rapport between the teams on both sides. Or, junior members are called in to watch and learn how to interact with customers. These are planned departures from the course of efficiencies espoused herein.

 

We have looked at only one kind of communication – telecon, and how to bring efficiency and quality therein. The structure of the telecon would depend on its purpose and would evolve over a few sessions. Where the telecon is to explain user requirements, we also saw some approaches to prepare ourselves better in reducing the burden of communication and achieve enhanced quality too. The processes of communication in IT services are pervasive and substantial in a project with enough opportunities to cut out waste at every step and to boost quality.

 

Can we shorten the long service chain of end-users to user-specifier to many in the IT service provider organization and back to user-acceptor? Also can we pare the written-down documentation to a minimum?  These steps should in principle reduce the over-all load of communication. That’s a subject for another day, with implications much beyond communication load.

 

Enriching views and experiences from you, whether in concurrence or to the contrary, are most welcome.

 

To continue on other themes for reducing waste…

 

Ref: (1) http://finance.isixsigma.com/library/content/c040324a.asp


This morning I received this ‘forward’ from my professor! Had to share it with all!

A new vacuum cleaner salesman knocked on the door of the first house on the street. A tall lady opened the door.

Before she could speak, the enthusiastic salesman barged into her living room and opened his big black plastic bag and poured out all the cow droppings onto the carpet.

“Madam, if I can’t clean this up with this new powerful Vacuum cleaner in the next 10 minutes, I will EAT all this dung!” exclaimed the eager salesman.

“Would you like chilli sauce or ketchup with that?” asked the lady.

The bewildered salesman asked, “Why, madam??”

“There’s no electricity in the house…” said the lady.

MORAL: Gather all requirements and resources before working on any project and committing to the client…!!!

(With due acknowledgement to the unknown original author, and thanks to Prof H B Kekre for the ‘forward’)



 

In these times, any organization responds by tightening its belt and putting new projects on the back burner. In their place, a number of quick-yielding initiatives are usually launched that are limited in scope, focused on results and often cut across functional silos. More often than not, these initiatives go below the IT radar and their roll-out has little IT support. This is a great opportunity for IT to step forward and support these initiatives effectively.

 

Let me present one such example of how IT made itself quite useful.

 

The organization is in the business of executing (fixed-price as well as time-and-material) software projects for its customers by deploying the software professionals on its rolls. It had a top-heavy structure. To make it more even-keeled and to reduce the overall cost of operations, a decision was taken to hire fresh-from-campus trainees (CT) in small numbers, induct them with adequate training and then staff them on projects. Intuitively it made a lot of sense to have some fresher recruits: they were skilled in contemporary technologies, high-energy and performance-driven.

 

The routine HR reports did not have much to say about these trainees in particular – whether the scheme was working and with what efficacy. Of course, they did show that the cost per employee-month dropped somewhat in the monthly payroll. But what about the impact on the business?

 

For starters, IT decided to separately tag the various batches of trainees inducted into the organization so that their performance could be tracked. Two other models of hiring were also concurrently in play. Some business-experienced but not technology-ready professionals were taken in under a Hire-Build-Deploy (HBD) model; these hires were sent out for intensive specialized training and brought back into the organization. There were also Lateral Hires (LH) who had a little experience and were a little ahead of the fresh trainees on the learning curve. These lateral hires, unlike the CT and HBD hires, were hired at any time of the year based on need and not brought in as batches; they were also not trained like the CT and HBD batches. They were tagged into yearly buckets (example: ‘07 LH’ were those laterally hired in the year 2007). So there were three different models and several uniquely tagged batches of trainees/hires under these models.

 

Now, IT created a few simple business-relevant views (and values) on these batches:

– How many months of billing did each batch generate on average in its first year and in its second year in the organization (tracking was limited to the first two years in the organization)?

– How quickly did these hires become billable?

– What kind of billing rates did these hires realize?

– Which Projects absorbed the most trainees? … (a sketch of how such batch-level views might be computed follows below)
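A minimal sketch of computing two of those batch-level views, assuming each hire record carries a batch tag; the record layout and the numbers are invented for illustration.

```python
from collections import defaultdict

# Each record: (batch_tag, months_billed_year1, months_billed_year2, months_to_first_billing)
hires = [
    ("07 CT",  6, 10, 5),
    ("07 CT",  4,  9, 7),
    ("07 HBD", 8, 11, 3),
    ("07 LH", 10, 11, 1),
]

by_batch = defaultdict(list)
for tag, y1, y2, ramp in hires:
    by_batch[tag].append((y1, y2, ramp))

for tag, rows in sorted(by_batch.items()):
    n = len(rows)
    avg_y1 = sum(r[0] for r in rows) / n
    avg_y2 = sum(r[1] for r in rows) / n
    avg_ramp = sum(r[2] for r in rows) / n
    print(f"{tag}: avg billing yr1 {avg_y1:.1f} mo, yr2 {avg_y2:.1f} mo, "
          f"avg months to first billing {avg_ramp:.1f}")
```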

 

 

Though the performance of the hires with regard to billing was not entirely in their own hands, some useful pointers were nevertheless obtained for the business. As expected, LH did the best in overall performance, followed by HBD and CT in that order. What was not expected was a detail: an LH candidate with one year of total experience did better than an equivalent CT candidate with the same one year of experience. The LH candidate perhaps showed the implicit advantage of a hiring process that, with prior wisdom, successfully matched the candidate’s profile to the demands of an available billing position.

 

A year-on-year comparison of the batches validated that: a) the selection process was getting better at specifying and assessing skills, and b) the various improvements brought about in the induction training were paying off; it also pointed to opportunities for doing even better. Projects that absorbed a good number of these hires presented both an opportunity and a risk of diluting quality for those customers, if overdone mindlessly. Clearly, the opportunity was to cut back on the employee costs in the Project and pass on at least part of the gains to the customer too; and, more importantly, to ask how this practice could be replicated in other, laggard Projects.

 

To cut a long story short, these views were useful to the business in figuring out a) whether an initiative works for the organization, b) which good practices need to be intensified and c) which practices need to be re-examined for better results. The prevailing enterprise applications do not support these overnight initiatives as well as required.

 

If only IT remains connected with the initiatives an organization undertakes from time to time, there are plenty of opportunities to make a difference to operations by providing actionable insights and value. Its cross-functional vision enables it to support these initiatives uniquely and quite effectively. Of course, this requires IT to actively scan and sense these possibilities and step forward unbidden to offer active support. The opportunities may not come its way cut and dried, laid out neatly on a plate.

 

All of these apply even during ‘peace’ times.

 

This way, IT is in Business, good times or not!


 

A question I often pop at software professionals is: how do you evaluate an OO design? Assume, for the present, that the functional completeness of the design is not in question. The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism, etc. are implemented in the design, how well reusability is exploited… and some get into OO metrics.

 

I rarely get the counter that the question is a wide-open one: there are several aspects (some 20-plus non-functional attributes) to a design, so which one do I have in mind for evaluating it? After all, a design is a model for realizing both functional and non-functional user requirements.

 

If I were asked to be more specific about my chief concern with regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. In some software, these changes are easier to make; in others, it is gut-wrenching. And, today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with business needs.

 

So the concern over the software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this all about the usual non-functional attribute of ‘maintainability’? Maybe, in parts. I would like to think of it more as the legitimate evolution of the software, while ‘maintenance’ connotes status quo. And today, the pace of this evolution has quickened even in ‘stable’ businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. The question could also be turned on its head: how does one produce a design that readily accommodates changes?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts – encapsulation, inheritance, the interface mechanism, etc. – are suited to coping with change. So, obviously, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing out nouns from the problem description… If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design is capable of coping with the changes to come.

 

The same principle applies to changes. Changes also demand a set of abstractions in the areas of change if they need to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers the promise of OO paradigm.

 

So the key to good design seems to lie outside the design phase! It is in the phase of assessing requirements – and, importantly, assessing how these requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section in the requirements document distinctly devoted to foreseeable evolutionary changes? Not in 9+ cases out of 10. No wonder our systems are not well equipped to adapt to the flow of time.

 

The software development community could counter with: “How can we foresee changes to come? If we could, we would provide for them from the word go.” This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user’s table questions specially targeting the future. Some are obvious in the trade and are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-person’s incentives or commission: The scheme for incentivising sales-persons changes from time to time even mid-year depending on the business objectives. In a healthy quarter, getting new clients may be important and in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted.  
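A minimal sketch, in Python, of what abstracting such a scheme might look like: the volatile commission rules sit behind one interface, so a mid-year change of scheme swaps one class rather than rippling through the application. The scheme names and rates are invented for illustration.

```python
from abc import ABC, abstractmethod

class IncentiveScheme(ABC):
    """The volatile part of the requirement is isolated behind one abstraction."""
    @abstractmethod
    def commission(self, sale_value: float, is_new_client: bool) -> float: ...

class NewClientFocus(IncentiveScheme):
    # Healthy quarter: reward new-client acquisition more heavily.
    def commission(self, sale_value, is_new_client):
        return sale_value * (0.05 if is_new_client else 0.02)

class AccountMiningFocus(IncentiveScheme):
    # Sluggish quarter: reward mining current accounts.
    def commission(self, sale_value, is_new_client):
        return sale_value * (0.02 if is_new_client else 0.04)

def pay_out(scheme: IncentiveScheme, sales):
    """The rest of the application depends only on the abstraction."""
    return sum(scheme.commission(value, is_new) for value, is_new in sales)

sales = [(100_000, True), (250_000, False)]
print(pay_out(NewClientFocus(), sales))      # 10000.0
print(pay_out(AccountMiningFocus(), sales))  # 12000.0
```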

 

However, plans to open a new office, start a new distribution channel, introduce a new pricing policy or new service offerings, acquire a company… may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be wise to some of these plans!

 

In summary, a formal and focused business-volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in the future (to the foreseeable extent) are identified and the design abstractions are set up appropriately. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in the user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, the algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when the current practice changes, reflecting those changes could turn out to be a minor or a significant effort depending on the degrees of freedom in the initial layout. For instance, consider organizations that frequently reorganize their operations – divisions and departments merge and split, get centralized and decentralized… The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason – the users involved may not be clear or articulate about their needs at that point in time, or the scenario may be in some kind of flux, to be fleshed out later. Design abstractions must be able to cope with these too.

 

All along the software architects and the designers were required to think of abstractions. Now are we wanting our Business Analysts also to get into the groove? Yes, that’s the drift. 

 

How do we build systems for businesses which are intrinsically very volatile? Will look at it in a post to follow.


 

(contd.)

 

Before we get to the core, I must mention an interesting solution that somewhat addresses the concerns voiced about mixing business logic into Processing Reports. Recently I came across a case study where an MNC had the usual problem of reporting from a variety of dispersed and disparate sources such as Excel sheets, legacy ERP systems, etc. The reports were quite complex, and there was no single container to host all the processing logic. The organization deployed ETL software that fitted the bill and loaded the data into a Reporting Database! This SQL database was used purely for reporting. On this database, they also built their business logic as a uniform interface and pulled out their reports. This architecture certainly fixed the problem of scattered business logic. The data was still transactional; the database was not of the data-warehouse kind.
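As a rough illustration of that idea – and only an illustration, since the case study’s actual stack is not known to me – the business logic can be expressed once as views on the reporting database, so that every report pulls from the same definitions. The sketch below uses Python with an in-memory SQLite database and invented table names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the reporting database
conn.executescript("""
CREATE TABLE invoices (id INTEGER, customer TEXT, amount REAL, paid INTEGER, due_date TEXT);
INSERT INTO invoices VALUES
  (1, 'Acme',   1200, 0, '2009-01-15'),
  (2, 'Acme',    800, 1, '2009-01-20'),
  (3, 'Zenith',  500, 0, '2009-02-01');

-- Business logic defined once, as a view; every report pulls from the view,
-- so 'payment pending' means the same thing in all of them.
CREATE VIEW pending_invoices AS
  SELECT customer, SUM(amount) AS pending_amount
  FROM invoices WHERE paid = 0 GROUP BY customer;
""")

for row in conn.execute("SELECT * FROM pending_invoices ORDER BY customer"):
    print(row)   # ('Acme', 1200.0), ('Zenith', 500.0)
```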

 

Even when there is a container application like an ERP instance to host the business logic and the reports, this solution may be an alternative meriting serious consideration when the reports are quite complex. The downsides to this approach are: a) the introduction of an intermediate step and possible time delays; b) the output business logic is still separated from the transaction-related business logic; c) views may be generated from the ERP instance or from the Reporting Database, with the challenge of making them look alike; and d) the reports have to be coded explicitly instead of using the ERP-native report generator. And, of course, all the associated maintenance issues.

 

How would I, as an IT professional (not as a management consultant), go about it if I needed to rationalize the output system of reports (and views) for maximum business impact? The exercise may be applied to a system of reports that already exists or to reports being planned for a new application. While some steps are obvious, some are not. The obvious steps (especially in an IT-mature organization) are included here for completeness:

 

– Compile a list of reports that need to be subjected to this exercise of rationalization.

 

– Develop the business purpose of each report. Weed out duplicate ways of stating the same purpose. Qualifiers are useful in generating variants of a business purpose: Shows payment-pending invoices by Region, Shows payment-pending invoices by Office, Shows payment-pending invoices by Customer, Shows payment-pending invoices by Product-line, Shows payment-pending invoices by Period, etc.

 

– One may or may not have the option of meaningfully (re)naming the reports to point to their purpose.

 

– Do a preliminary check that the information content of the report supports the business purpose. The depth of this check depends on the IT professional’s knowledge of the business and of best practices in the domain.

 

– Generate a reference matrix showing reports and their users. These users are grouped under their functional silos: Finance/Accounts-Receivables, HR/Payroll, etc.

 

– Classify the users of each report: a user may be a ‘direct’ or ‘responsible’ user, using the report to manage his operations; or a ‘supervisory’ or ‘accountable’ user, using the report to review the operations with his team. An ‘informational’ user is merely informed by the report. This simple classification is adequate for most purposes.

 

– Revisit each report with its direct and supervisory users. Validate the business purpose, the information content and the format – the format aspect of a report, though quite important, is not pursued further in this blog. There are some interesting and powerful opportunities at this step to restore true value: a) Check whether the report is used by the user directly as such, or whether he does further processing to make it usable for its intended purpose. Very often, it is found, the user does some additional massaging of the numbers in the report: a missing summary, computing a ratio or a KPI, a comparison with a past period, etc. A significant efficiency contribution would be to cut out this massaging. b) More complex massaging is usually carried out in Excel. Can this be done away with, or at least seamlessly integrated? c) This is an opportunity to ‘hard’ reconcile the supervisory perspective of a business aspect with the direct operational perspective. A no-brainer simplification is to ensure the Transaction Report goes to operating personnel and the related Summary Report goes to supervisory personnel. d) Review the list of ‘informational’ users of this report and the reasons for their inclusion or exclusion. Mark candidates for inclusion/exclusion.

 

– These done, take the discussion to the broader plane of the user’s responsibilities and how the reports support those responsibilities. This would reveal the ‘missing’ views and reports – potential for creating value. It is not unusual to find system outputs not covering the entire breadth of a user’s responsibilities or his KPIs.

 

– Review with each informational user the list of reports he receives and his thoughts on inclusions and exclusions. Go back to the direct and supervisory users of the reports to finalize the ‘informational’ inclusions and exclusions. At this point, a report may even undergo some changes to suit the needs of the informational users, or some missing reports may again be uncovered.

 

– Note that a report with multiple ‘responsible’ users especially from different functional silos strongly indicates multiple business purposes stated or omitted. And a report with multiple purposes is a strong candidate for splitting.

 

– Multiple reports for same or related purposes are good candidates for merging. When the business purpose is quite specific (not generic like ‘highlights cost-overruns’) their distribution lists could still be different if they present different perspectives. Do they?

 

– Develop an exhaustive list of abnormal events that could occur in each functional silo and across silos. Relate each event to the Exception Report that shows it up. This may reveal events with potentially serious consequences being passed through. It is also important to check a) if these events are pointed to at the earliest possible instant after their occurrence b) the reporting intensity is commensurate with the degree of abnormality and c) recipients of the reports include all users concerned with the origin of the events and the organization’s consequent responses. Without sufficient care here, process breaks could severely impair the organization’s ability to respond.

 

– A report-type view of the system of reports also throws up useful but gross pointers to some imbalances. Absence of Log Reports may readily indicate holes in statutory compliance or weaknesses in security audit procedures and, in some cases, even in recovery capabilities. Few Exception Reports may point, as we have already seen, to a failure to flag significant abnormal events in the operations, impairing the ability to respond quickly. Are the operating and supervisory personnel adequately (not overly) serviced, covering their major responsibilities and accountabilities with Transaction, Processing and Summary Reports? Similarly, are business purposes adequately and powerfully supported? Are functional silos bridged optimally?
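Parts of this checklist can even be mechanized once the report–user reference matrix exists. Below is a minimal sketch, in Python, with invented report names, users and purposes, that flags reports whose ‘responsible’ users span multiple silos (split candidates) and reports whose stated purposes overlap (merge candidates).

```python
from collections import defaultdict

# (report, user, silo, role) – role is 'responsible', 'accountable' or 'informational'
matrix = [
    ("Pending Invoices by Region", "A. Rao",   "Finance", "responsible"),
    ("Pending Invoices by Region", "B. Menon", "Sales",   "responsible"),
    ("Pending Invoices by Region", "CFO",      "Finance", "accountable"),
    ("Overdue Receivables",        "A. Rao",   "Finance", "responsible"),
]

purposes = {
    "Pending Invoices by Region": "payment-pending invoices by region",
    "Overdue Receivables":        "payment-pending invoices by customer",
}

# Split candidates: reports whose 'responsible' users span more than one silo.
silos_per_report = defaultdict(set)
for report, _user, silo, role in matrix:
    if role == "responsible":
        silos_per_report[report].add(silo)
split_candidates = [r for r, silos in silos_per_report.items() if len(silos) > 1]

# Merge candidates: different reports whose stated purposes share significant words (a crude check).
merge_candidates = [
    (a, b) for a in purposes for b in purposes
    if a < b and set(purposes[a].split()) & set(purposes[b].split()) - {"by"}
]

print(split_candidates)   # ['Pending Invoices by Region']
print(merge_candidates)   # [('Overdue Receivables', 'Pending Invoices by Region')]
```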

 

It would be interesting to see if some principles of project portfolio management could be carried into this exercise of rationalizing system outputs. 

 

Just as we have rigor in database design (ERD, normalization…), this appears to be a ripe candidate for a formal model and practice, both for design and, importantly, for ongoing review.

 

In summary, rationalizing the system outputs has a ready pay-back in managerial effectiveness by: a) re-engineering the outputs for maximum business impact and operational efficiency; b) weeding out redundancies in the outputs as well as in their distribution; c) discovering opportunities for filling gaps and creating value for the business; and d) making up for debilitating process breaks.

 

Importantly, note that IT application boundaries, their technology platforms or deployment architecture do not pose any problems in carrying out this exercise. Since change is constant with businesses, this cathartic effort is not likely to be single-shot.  

 

A potential service offering of real value from the CIO’s stable? It has a quick turn-around and, for the most part, may not need face-to-face meetings or travel.

 

(concluded)

 


 

Reports and online views are organized presentations of information for ready comprehension and decision-making. They form a major part of the usable outputs of IT systems and are the basis for managing the operations of an enterprise. Yet these outputs, taken as a whole or individually, are not subjected to any kind of design rigor, except for their formats! Targeting this concern, this and a following blog introduce some basic concepts and build simple practices towards optimally designing this system of outputs.

 

Today, reports are viewable online and views are printable offline. Dashboards are a special kind of view that uses graphic metaphors instead of rows and columns. The discussion here refers to reports and is equally applicable to the other forms of output. The principles and practices outlined apply to reports that are planned from the ground up and developed for use, not to reports that are designed and retrieved totally on the fly with a query-report engine or an analytics engine.

 

These ‘canned’ reports build up to a sizeable number in any application and have an abiding tendency to multiply, weed-like, well beyond the original plans. One only has to look at any ERP roll-out to see it for real, though this dangerous ‘disease’ is not limited to ERP solutions alone. Why is it a ‘disease’, and dangerous at that? Multiply the number of reports by the number of recipient users to get the total number of report instances perused. Now multiply that by 10 minutes (or some other number, less or more), which could be the average time any user spends with a report instance. This is the (crudely estimated) amount of time, possibly of senior management, soaked up by these reports. Individually it may not be very significant, but collectively it could be quite substantial. In fact, it is simple to paralyze an organization without setting off alarms in any quarter – all that needs to be done is to ‘helpfully’ over-provision users in different parts of the organization with any number of reports!
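As a back-of-envelope illustration of that arithmetic – the figures below are assumed, not drawn from any particular roll-out:

```python
# Assumed figures; plug in your own organization's numbers.
canned_reports        = 120   # reports rolled out across the application
recipients_per_report = 6     # average number of users receiving each report
minutes_per_perusal   = 10    # average time a user spends with one report instance

hours_soaked_up = canned_reports * recipients_per_report * minutes_per_perusal / 60
print(f"{hours_soaked_up:.0f} person-hours per reporting cycle")  # 120 person-hours
```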

 

The obvious remedy, common sense tells us, is to strongly question the need for every report and remove the redundancies. Before we look at the remedy more closely, let us look at what these reports are like and what they are generally used for:

 

a) Dump or Log or Scroll reports: these are records of every transaction processed by the application. There may be additional records showing the trail of events before and after the transactions. These reports are mainly used for statutory reasons, audit purposes, as historical archive and, sometimes, for information recovery (When the primary purpose is information recovery, the Dump may not be human-readable and is usually processed by a system utility. It is no longer considered as a report).   

 

b) Transaction Reports: these are reports of transactions filtered by some selection criteria, sorted, summed up and formatted. Prior-period data may be included for comparison. These reports are of the informative kind: which product sold how much in which region, which parts were supplied by a vendor, which orders were processed by a machine shop, etc. A drill-down facility may be available to track the details of a transaction across functional silos. Usually these reports do not process the data beyond reporting it as such, except for some totaling or calculation of percentages. They are useful for managers to monitor the operations under their supervision.

 

c) Summary Reports: these reports abstract out the transactions and focus more on various kinds of summaries of transactions. Of course, the drill-down may show underlying transactions. These reports are used by senior managers to monitor the performance of their areas of operation at an aggregate level. Dashboards could be placed under this type.

 

d) Processing Reports: these reports, as the name implies, may include a significant amount of processing on the underlying data. This processing is distinct from merely crunching the data for presentation by way of charts and graphs. Senior managers may use these reports to look at scenarios that are not intrinsically modeled in the enterprise applications. A typical example is to pull out raw data, apply some adjustment rules and produce final adjusted numbers. The downside to these reports is the danger of mixing up processing with presentation: processing gets fragmented and is not standard across reports, leading to problems in reconciling different reports that work on the same data. For example, two reports on resource utilization may differ depending on how they process the utilization data – one may round off to the nearest week while the other processes the data as is, in days, without any round-off.
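A tiny illustration of how that divergence arises, with invented numbers: both reports start from the same four weeks of utilization data, but one rounds each week to the nearest whole week before computing.

```python
# The same underlying data: days a resource was utilized in each week of a 4-week period.
days_utilized = [4, 3, 5, 2]
working_days_per_week = 5

# Report A processes the data as is, in days.
utilization_a = sum(days_utilized) / (len(days_utilized) * working_days_per_week)

# Report B rounds each week off to the nearest whole week before computing.
weeks_utilized = sum(round(d / working_days_per_week) for d in days_utilized)
utilization_b = weeks_utilized / len(days_utilized)

print(f"Report A: {utilization_a:.0%}")  # 70%
print(f"Report B: {utilization_b:.0%}")  # 75%
```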

 

In ERP rollouts, loading a good amount of processing logic into reports is a common practice, the formidable alternative being to customize the ERP.

 

It is another matter that when the enterprise model is complex as with ERP solutions, reports (not limited to processing reports) may differ simply on where they pull their data from (ignoring for a moment differences in processing the data as mentioned above) and enormous efforts are wasted on reconciling the different reports. Going back to the example of reporting on utilization of human resources, the report pulling data from the HR function would not easily match with the report pulling data from the Projects function.

 

e) Exception Reports: these reports, different from alerts, draw the attention of operating personnel and managers especially to deviations from the established operating norms. It is easy to envisage exception reports in every aspect of operations. Example: A report recommending replenishment of specific stock items.  

 

And some of them are not directly related to the operations. For instance, exception reporting is very effective in spotlighting data validation and integrity errors for subsequent data correction. Security aspects like attempted security breaches are usually reported as exceptions.

 

The above taxonomy of reports is sufficient for the discussion here, even if it is not all-inclusive. The report types are not mutually exclusive: a report on the ageing of customers’ pending bill payments could first be considered an Exception Report insofar as it highlights an abnormal situation for follow-up, and it may also qualify as a Summary Report. The function overrides the form.

 

Reports usually push for some organizational responses. Transaction and Summary Reports focus on performance of one or more entities and their interplay and provide the basis for some broad-based follow-up actions. Exception Reports provoke pointed actions to handle specific deviations. Dump Reports do not trigger any immediate response.     

 

With this background, we are ready to go back to the ‘disease’ and the common-sense remedy we talked about earlier.

 

At this point, it is more interesting to look at reports or views, taken as a whole or individually, in an enlarged perspective of how aligned they are to the business and not merely for the purposes of curbing the excesses. The impact of closely aligning the outputs to the needs of the business  would be positively beneficial, given that the organization depends mainly on these reports and views for life-signs and to manage its head to tail.

 

As mentioned at the outset, surprisingly, from a software engineering (or is it information engineering?) perspective, this important piece of an organization’s information systems has not been subjected to much design rigor, formal or otherwise, to optimally align it with the business.

 

Will set off on this un-rutted path in a soon-to-be blog.
