
Posts Tagged ‘Functional Requirement’

New Hatton Garden heist image

During a bank heist, the Chief told the Sergeant to cover all exits so the robbers could not get away.

Later, the Sergeant reported to the Chief.

“Sorry, sir, but they got away.”

The Chief, very disappointed, said, “I told you to cover all exits!”

“I did,” replied the Sergeant, “but they got away through the entrance.”

That was in jest.

But in reality, it is not very different in the IT/services sector.

How often do we tell our customers: “Mr. Customer, you never brought this up all this time, and now…”

I’m reminded of an incident where a CFO kept asking for more reports. The PM (Project Manager), in sheer disgust, got his manager to rein the CFO in. After the polite handshakes, the manager brought up the issue. He assured the CFO they were not reluctant to give him what he wanted, but these would be regarded as chargeable change requests.

The CFO, in an even tone, drew the manager’s attention to a line in the Work-Order.

It said “Accounts-Receivables”.

“You had claimed you’ve rolled it out at a number of other sites. And don’t you know AR implies all these reports?”

End of discussion on scope creep.

When we unleash an untrained or inexperienced business analyst on our customer – these days many a youngster aspires to become one (a business analyst, I mean) – he fails to see beyond what the customer literally says. And the customer goes away with the comforting thought that the analyst has understood him in whole. Such an analyst may be likened to the stenographers of yesteryears, operating strictly in a what-you-said-is-what-you-get mode – a great disservice to our customer.

An interesting aside: the image appearing above is one officially released of the locker room in the Hatton Garden heist. The burglars, it is believed, first entered at about 9.20pm on 2 April and stayed until 8.05am the next morning, Good Friday. An alarm went off at 12.21am on 3 April, about three hours after the gang entered the vault area, according to timings released by police. The gang returned to the vault on Saturday 4 April at about 10.17pm, staying until 6.30am the next morning. Key staff were off work because of the Easter holiday, and police were alerted to the burglary just after 8am on Tuesday 7 April. The alarm had been recorded and transferred to the police’s computer-aided dispatch system. “A grade was applied to the call that meant that no police response was deemed to be required,” the statement released by the police said. “An internal investigation is ongoing to identify why this grade was applied to the call in conjunction with the alarm company.”

The final disposition of the investigation is not known. I’m sure an interesting story is waiting to be unravelled.

End

Credits: ajokeaday.com, and the press for the info on the heist.


Scene II:

Anon Presentation

The main accomplishment in Scene I was to wean the End User (EU) away from ‘reports and formats’, get him to talk about the performance-defining parameters, and have the application compute them for him.

So when they assembled again a few days later, the Business Analyst (BA) and the End User (EU) sported Cheshire-cat grins.

The report designed this time had all the right columns and filters for selection. Additionally, Fuel Efficiency was computed and reported at the bottom.

The Consultant (C) checked if they agreed on how Fuel Efficiency was computed. While the definition was simple – the ratio of kilometers run to fuel consumed – in reality the method for computing it was a little tricky and at best approximate. It was important to ensure this was understood clearly and unambiguously. The kilometers run had to be marked from one tank-fill to another, and the efficiency computed over so many tank-fills. The period of computation was not delimited by a day or a week or any other time period. Over many tank-fills, it would have made little difference whether the computation was delimited by tank-fills or by time period, but not when the tank-fills were only a few in a week. Also, it was not always a full tank-fill: sometimes the crew went in for a fill on sighting a filling station though the tank was not yet empty. This meant the amount of fuel filled had to be additionally captured; it could not be assumed to always be the capacity of the tank.
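For concreteness, here is a minimal Python sketch of that tank-fill computation; the FillEvent structure and the figures are illustrative assumptions, not from the actual application:

```python
from dataclasses import dataclass

@dataclass
class FillEvent:
    odometer_km: float    # odometer reading at the time of this fill
    litres_filled: float  # actual fuel filled - not assumed to be tank capacity

def fuel_efficiency(fills: list[FillEvent]) -> float:
    """Km per litre over a span of tank-fills.

    The first fill only establishes the starting odometer mark; the fuel
    consumed over the span is everything filled at the subsequent stops.
    """
    if len(fills) < 2:
        raise ValueError("need at least two tank-fills to mark a span")
    km_run = fills[-1].odometer_km - fills[0].odometer_km
    fuel_used = sum(f.litres_filled for f in fills[1:])
    return km_run / fuel_used

# Three fills in a week: 800 km on 73.5 litres, about 10.88 km/l
fills = [FillEvent(10_000, 40.0), FillEvent(10_380, 35.5), FillEvent(10_800, 38.0)]
print(f"{fuel_efficiency(fills):.2f} km/l")
```

Note the approximation the EU described: unless the last stop is a full top-up, some consumed fuel goes uncounted – which is why the number firms up only over many tank-fills.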

To their credit, this was clearly set out by the EU and well understood by the BA. No issues there.

‘Now, what do you do with this magic number on Fuel Efficiency?’ C asked the EU.

‘Well, now I know if I have a problem or not.’

‘Know’ was the proverbial red rag to C.

‘How do you know? Let me put it differently – how do you defend this number to your boss?’

‘I look at this number and look at the type of roads covered. And I know if it’s right or not.’

‘How does it work?’

‘It all depends – if the kilometers were run on highways, I expect a higher efficiency than if it were within a city. Similarly, if the vehicle is on a productive run, it is usually at a lower speed and hence at lower efficiency than in transit.’

‘So you look at the number and look at the composition of the run kilometers and take a call?’

‘Yes, that’s right.’

‘Everybody – your boss and the supervisors in the field – they buy your call?’

‘Well…’

‘How about getting the system to apply the ‘judgment’ you presently make?’

‘If it can be done…’

‘All you need to do is to capture the daily break-up of kilometers run under those four heads: Intracity (Production and Transit) and Intercity (Production and Transit).’

‘That’s possible, though it may not be accurate. We can get the vehicle crew to log the daily kilometers in that manner. That’s not too much additional effort for them.’

‘Now, let us get the break-up in and compute the Fuel Efficiency for each of those four categories separately. You’ll then see clearly the performance and the problem if there’s one.’

End of Scene II

Clearly this was more helpful in getting nearer to the problem area. The trick was to ask the question ‘What would you do with this output?’ repeatedly, getting as close as possible to the real performance or problem – and not to stop half-way, leaving the EU to cover the rest in his head.

In many instances the EU is short-changed in a manner he is not even aware of: he is required to further process the data given to him. Essentially, the output is not directly usable.

It would be interesting to run this simple check on any system: how many of its outputs are directly usable, immediately supporting the decisions to be made? It may reveal pockets of IT inefficiency, besides throwing up redundancies and inconsistencies in the output.

For reasons of clarity, a minor detail was left out of the above scene: the EU pointed out that while a break-up of the daily kilometers run is a simple matter, the fuel consumed in a day could not be broken up under those heads. And hence Fuel Efficiency could not be computed under the different categories. For a moment, C’s efforts to push for greater proximity to the performance appeared stymied. Then he suggested: start with reasonable targets for Fuel Efficiency under each of the four heads. For the actual kilometers run over several tank-fills, compute the weighted Fuel Efficiency, applying the targets to these kilometers. Now the weighted target Fuel Efficiency is available for comparison with the actual Fuel Efficiency realized.
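One plausible reading of C’s suggestion, as a worked Python sketch (the targets and the kilometer break-up below are hypothetical): apply each category’s target to its kilometers to get the fuel the run should have consumed, and derive the weighted target Fuel Efficiency from that.

```python
# Assumed targets (km/l) for the four heads - purely illustrative numbers
targets = {"intercity_transit": 14.0, "intercity_production": 11.0,
           "intracity_transit": 9.0,  "intracity_production": 7.0}

# Actual kilometers logged by the crew under each head, over the same
# tank-fills for which the actual fuel consumption is known
km = {"intercity_transit": 420.0, "intercity_production": 180.0,
      "intracity_transit": 90.0,  "intracity_production": 110.0}

total_km = sum(km.values())                                  # 800 km
expected_fuel = sum(k / targets[c] for c, k in km.items())   # ~72.1 litres
weighted_target_fe = total_km / expected_fuel                # ~11.10 km/l

actual_fuel = 73.5                                           # litres consumed
actual_fe = total_km / actual_fuel                           # ~10.88 km/l
print(f"target {weighted_target_fe:.2f} vs actual {actual_fe:.2f} km/l")
```

The run as a whole is then judged against a target that reflects its actual mix of road and duty types, rather than against a single flat number.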

End


Credit: openclipart.com (Anonymous)


Scene I:

Anon Reunion

The Consultant (C) was charged with providing an extra level of oversight to the projects under execution. He had called a meeting of the End User (EU) and the Techie doubling up as a Business Analyst (BA) to inquire into the status of the project.

The company operated a fleet of vehicles that traversed the length and breadth of the country. The project was to develop a software application, ‘Daily Fleet Movement (DFM)’, conceived as the first of several modules needed to operate and manage the fleet.

The BA reported on the status: the EU and he had agreed on a set of reports – the primary output of the system (screen-based or in print) – to be generated on the vehicle and the driver, with facilities for filtering on dates, towns, etc. He further stressed, in C’s presence, the finality of the report content and formats arrived at after lengthy iterations. This he believed was necessary, especially in view of an earlier experience where a project had dragged on inordinately, with changes to the output coming from the EU right up to the final stages. The solemnity the BA was imposing on the occasion made the EU nervous about what he was signing off. So he had questions and concerns about what he would get to see from the application, and whether the same debilitating ‘holes’ and painful iterations of the earlier experience would recur this time too.

While the formats and the flexibility of retrieval were being discussed, C jumped in with a question for the EU:

‘Well, you certainly need these reports and you’ll get them. But I’ve a concern.’

Both EU and BA stopped in their tracks and looked at C.

‘I’m sure you’re tracking and managing the operations on the basis of a few parameters?’

‘Most certainly so, how else would one go about?’ The EU didn’t say it; his body spoke.

‘How come these don’t get mentioned in your discussion?’

‘Not right. You heard us talk about the ‘Vehicle Usage Report’, the ‘Fuel Efficiency Report’…’

‘Do you realize you’re asking for a Vehicle Usage Report and our friend here is giving you a big daily log of which vehicle plied where? Exactly what you’re asking for. While the name of the report is comforting, what would you do with it?’

‘What’s wrong with it? I’ve always got one compiled. I can find out, for instance, how many kilometers a vehicle covered in a day.’

‘So you’ll find out, I’m sure, somehow, from this log – though I don’t know how. Now, don’t you want the software to compute and report the same for your ready use, instead of you ‘finding out’?’

C turned to the BA: ‘Just as I suspected. More often than not, the output generated by an application stops short of what an EU must have. And the EU fills the gap by some means, sometimes even erroneously, watering down the benefits of automation. He doesn’t know to ask. If that’s not short-changing the EU…’

And to the EU: ‘The few parameters that you need for tracking and managing the operations are called Key Performance Indicators (KPIs).’

Again the look of ‘What’s wrong with him?’

‘My submission is: you tell the BA you need these KPIs to be computed and reported. Let him start from there and figure out how they’re computed and how they could be presented for effective communication. You don’t tell him: ‘These are the reports I need, here are the formats, now can you get on with it?’ And you don’t ‘find out’.’

The BA and the EU agreed to take up one KPI – Fuel Efficiency – and adopt this approach to design the report afresh from first principles.

End of Scene I

Not to be laughed off. Many sessions of requirements gathering proceed along the above lines, especially in smaller and not-so-IT-savvy shops. Two common reasons: a) the EU is very assertive, and/or b) the BA lacks the necessary skills to set the right start for the discussion and take it to a conclusion. It is a misconception that a techie, or a UX designer with his wireframes, is adequate to tease out the business requirements.

So what we have, net-net, is the patient telling the doctor: ‘I know what ails me, Doc, give me these pills.’

Scene II gets even more interesting when they meet again to apprise C of the output they had designed to report on Fuel Efficiency. Once the approach was clear, arriving at a design was a pretty straightforward exercise. Right?

Please wait for Scene II to appear, where C continues his review of the design presented to him, making a point or two of far-reaching impact.

End


One of the biggest challenges in building software and systems – and the least appreciated – is drawing the line on what features are in and what are not. Whenever you catch the smell of feature creep, call up this modern parable. Or even better, at the project kickoff held right at the outset, when expectations, success factors and scope are discussed, it may be a good idea to treat your audience to this story during the coffee-break:


Once upon a time, in a kingdom not far from here, a king summoned two of his advisors for a test. He showed them both a shiny metal box with two slots in the top, a control knob, and a lever. “What do you think this is?”

One advisor, an engineer, answered first. “It is a toaster,” he said.

The king asked, “How would you design an embedded computer for it?”

The engineer replied, “Using a four-bit microcontroller, I would write a simple program that reads the darkness knob and quantizes its position to one of 16 shades of darkness, from snow white to coal black. The program would use that darkness level as the index to a 16-element table of initial timer values. Then it would turn on the heating elements and start the timer with the initial value selected from the table. At the end of the time delay, it would turn off the heat and pop up the toast. Come back next week, and I’ll show you a working prototype.”

The second advisor, an IT Analyst, immediately recognized the danger of such short-sighted thinking. He said, “Toasters don’t just turn bread into toast, they are also used to warm frozen waffles. What you see before you is really a breakfast food cooker. As the subjects of your kingdom become more sophisticated, they will demand more capabilities. They will need a breakfast food cooker that can also cook sausage, fry bacon, and make scrambled eggs. A toaster that only makes toast will soon be obsolete. If we don’t look to the future, we will have to completely redesign the toaster in just a few years.”

“With this in mind, we can formulate a more intelligent solution to the problem. First, create a class of breakfast foods. Specialize this class into subclasses: grains, pork, and poultry. The specialization process should be repeated with grains divided into toast, muffins, pancakes, and waffles; pork divided into sausage, links, and bacon; and poultry divided into scrambled eggs, hard-boiled eggs, poached eggs, fried eggs, and various omelet classes.”

“The ham and cheese omelet class is worth special attention because it must inherit characteristics from the pork, dairy, and poultry classes. Thus, we see that the problem cannot be properly solved without multiple inheritance. At run time, the program must create the proper object and send a message to the object that says, ‘Cook yourself.’ The semantics of this message depend, of course, on the kind of object, so they have a different meaning to a piece of toast than to scrambled eggs.”

“Reviewing the process so far, we see that the analysis phase has revealed that the primary requirement is to cook any kind of breakfast food. In the design phase, we have discovered some derived requirements. Specifically, we need an object-oriented language with multiple inheritance. Of course, users don’t want the eggs to get cold while the bacon is frying, so concurrent processing is required, too.”

“We must not forget the user interface. The lever that lowers the food lacks versatility, and the darkness knob is confusing. Users won’t buy the product unless it has a user-friendly, graphical interface. When the breakfast cooker is plugged in, users should see a cowboy boot on the screen. Users click on it, and the message ‘Booting UNIX v.8.3’ appears on the screen. (UNIX 8.3 should be out by the time the product gets to the market.) Users can pull down a menu and click on the foods they want to cook.”

“Having made the wise decision of specifying the software first in the design phase, all that remains is to pick an adequate hardware platform for the implementation phase. An Intel 80386 with 8 MB of memory, a 30 MB hard disk, and a VGA monitor should be sufficient. If you select a multitasking, object-oriented language that supports multiple inheritance and has a built-in GUI, writing the program will be a snap. (Imagine the difficulty we would have had if we had foolishly allowed a hardware-first design strategy to lock us into a four-bit microcontroller!)”

The king wisely had the IT Analyst beheaded, and they all lived happily ever after.

End
Credit: Unknown Usenet source (edited), wackywits.com, openclipart.com (seanujones) and public-domain-photos.com.


(contd.)

Before we get to the core, I must mention an interesting solution that addresses, in a way, the concerns voiced about mixing business logic into Processing Reports. Recently I came across a case study where an MNC had the usual problem of reporting from a variety of dispersed and disparate sources such as Excel sheets, legacy ERP systems, etc. The reports were quite complex, and there was no single container to host all the processing logic. The organization deployed an ETL tool that fitted the bill and loaded the data into a Reporting Database! This SQL database was used purely for reporting. On this database they also built their business logic as a uniform interface, and pulled their reports from it. This architecture certainly fixed the problem of scattered business logic. Note that the data remained transactional; the database was not of the data-warehouse kind.
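A minimal sketch of that arrangement, assuming hypothetical table, column and function names (sqlite3 standing in for the reporting database): the business rule is coded once, and every report that needs it pulls from this single interface.

```python
import sqlite3

def payment_pending_invoices(conn: sqlite3.Connection, region: str | None = None):
    """Single, shared definition of 'payment-pending': every report that
    needs this business rule calls here instead of re-coding it."""
    sql = """SELECT invoice_no, region, amount
             FROM invoices
             WHERE paid_on IS NULL AND due_date < DATE('now')"""
    args: tuple = ()
    if region is not None:
        sql += " AND region = ?"
        args = (region,)
    return conn.execute(sql, args).fetchall()

if __name__ == "__main__":
    # Tiny in-memory demo of the same interface serving a 'by Region' report
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE invoices
                    (invoice_no TEXT, region TEXT, amount REAL,
                     due_date TEXT, paid_on TEXT)""")
    conn.execute("INSERT INTO invoices VALUES "
                 "('A-17', 'North', 1200.0, '2015-01-31', NULL)")
    print(payment_pending_invoices(conn, region="North"))
```

Any number of reports – by Region, by Customer, by Period – now share one definition of the rule and cannot drift apart.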

 

Even when there is a container application, like an ERP instance, to host the business logic and the reports, this solution may be an alternative meriting serious consideration when the reports are quite complex. The downsides to the approach are: a) the introduction of an intermediate step and possible time delays, b) the output business logic is still separated from the transaction-related business logic, c) views may be generated from the ERP instance or from the Reporting Database, with the attendant challenge of making them look alike, and d) the reports have to be coded explicitly instead of using the ERP-native report generator – with all the associated maintenance issues.

How would I, as an IT professional (not as a management consultant), go about it if I needed to rationalize the output system of reports (and views) for maximum business impact? This is an exercise that may be applied to a system of reports that already exists, or to reports being planned for a new application. While some steps are obvious, some are not. The obvious steps (especially in an IT-mature organization) are included here for completeness:

- Compile a list of the reports that need to be subjected to this exercise of rationalization.

- Develop the business purpose of each report. Weed out duplicate ways of stating the same purpose. Qualifiers are useful in generating variants of a business purpose: shows payment-pending invoices by Region, shows payment-pending invoices by Office, shows payment-pending invoices by Customer, shows payment-pending invoices by Product-line, shows payment-pending invoices by Period, etc.

- One may or may not have the option of meaningfully (re)naming the reports to point to their purpose.

- Do a preliminary check that the information content of each report supports its business purpose. The depth of this check depends on the IT professional’s knowledge of the business and of best practices in the domain.

- Generate a reference matrix showing reports and their users, with the users grouped under their functional silos: Finance/Accounts-Receivables, HR/Payroll, etc.

- Classify the users of each report. A user may be a ‘direct’ or ‘responsible’ user, using the report to manage his operations; or a ‘supervisory’ or ‘accountable’ user, using the report to review the operations with his team. An ‘informational’ user is merely kept informed by the report. This simple classification is adequate for most purposes. (A minimal sketch of such a report-user matrix appears after this list.)

- Revisit each report with its direct and supervisory users. Validate the business purpose, the information content and the format – the format aspect of a report, though quite important, is not pursued further in this blog. There are some interesting and powerful opportunities at this step to restore true value: a) Check if the report is used by the user as such, or if he processes it further to make it usable for its intended purpose. Very often, it is found, the user gives the numbers some additional massaging: a missing summary, a ratio or a KPI to be computed, a comparison with a past period, etc. Cutting out this massaging would be a significant efficiency contribution. b) More complex massaging is usually carried out in Excel. Can this be done away with, or at least seamlessly integrated? c) This is an opportunity to ‘hard’ reconcile the supervisory perspective of a business aspect with the direct operational perspective. A no-brainer simplification is to ensure the Transaction Report goes to operating personnel and the related Summary Report goes to supervisory personnel. d) Review the list of ‘informational’ users of the report and the reasons for their inclusion or exclusion, and mark candidates for each.

- These done, take the discussion to the broader plane of the user’s responsibilities and how the reports support those responsibilities. This would reveal the ‘missing’ views and reports – potential for creating value. It is not unusual to find system outputs not covering the entire breadth of a user’s responsibilities or his KPIs.

- Review with each informational user the list of reports he receives and his thoughts on inclusions and exclusions. Go back to the direct and supervisory users of the reports to finalize the ‘informational’ inclusions and exclusions. At this point a report may even undergo some changes to suit the needs of its informational users, or some more missing reports may be uncovered.

- Note that a report with multiple ‘responsible’ users, especially from different functional silos, strongly indicates multiple business purposes, stated or omitted. And a report with multiple purposes is a strong candidate for splitting.

- Multiple reports for the same or related purposes are good candidates for merging. When the business purpose is quite specific (not generic like ‘highlights cost-overruns’), their distribution lists could still be different if they present different perspectives. Do they?

- Develop an exhaustive list of abnormal events that could occur in each functional silo and across silos. Relate each event to the Exception Report that shows it up. This may reveal events with potentially serious consequences passing through unreported. It is also important to check a) that these events are flagged at the earliest possible instant after their occurrence, b) that the reporting intensity is commensurate with the degree of abnormality, and c) that the recipients of the reports include all users concerned with the origin of the events and with the organization’s consequent responses. Without sufficient care here, process breaks could severely impair the organization’s ability to respond.

- A report-type view of the system of reports also throws up useful, if gross, pointers to imbalances. The absence of Log Reports may readily indicate holes in statutory compliance or weaknesses in security-audit procedures, and in some cases even in recovery capabilities. Few Exception Reports may point, as we have already seen, to a failure to flag significant abnormal events in the operations and to respond quickly. Are the operating and supervisory personnel adequately (not overly) serviced with Transaction, Processing and Summary Reports, covering their major responsibilities and accountabilities? Similarly, are business purposes adequately and powerfully supported? Are functional silos bridged optimally?
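To make the reference matrix and the role classification concrete, here is a minimal Python sketch with wholly illustrative report, user and silo names; it also mechanizes the ‘multiple responsible silos’ check noted above.

```python
from enum import Enum

class Role(Enum):
    DIRECT = "direct/responsible"
    SUPERVISORY = "supervisory/accountable"
    INFORMATIONAL = "informational"

# report -> {user: (functional silo, role)} -- all names are hypothetical
matrix = {
    "Payment-Pending Invoices by Region": {
        "AR Clerk":        ("Finance/Accounts-Receivables", Role.DIRECT),
        "AR Manager":      ("Finance/Accounts-Receivables", Role.SUPERVISORY),
        "Regional Head":   ("Sales", Role.INFORMATIONAL),
    },
    "Project Utilization Summary": {
        "Project Manager": ("Projects", Role.DIRECT),
        "HR Partner":      ("HR", Role.DIRECT),   # a second 'responsible' silo
    },
}

# Flag reports with 'responsible' users in more than one silo -- per the
# note above, a strong hint of multiple (possibly unstated) purposes.
for report, users in matrix.items():
    silos = {silo for silo, role in users.values() if role is Role.DIRECT}
    if len(silos) > 1:
        print(f"'{report}' has responsible users in {sorted(silos)}: "
              "candidate for splitting")
```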

 

It would be interesting to see if some principles of project portfolio management could be carried over into this exercise of rationalizing system outputs.

Just as we have rigor in the design of databases (ERD, normalization…), this appears to be a ripe candidate for a formal model and practice, both for initial design and, importantly, for ongoing review.

In summary, rationalizing the system outputs has a ready pay-back in terms of managerial effectiveness: a) re-engineering the outputs for maximum business impact and operational efficiency, b) weeding out redundancies in the outputs as well as in their distribution, c) discovering opportunities for filling gaps and creating value for the business, and d) making up for debilitating process breaks.

Importantly, note that IT application boundaries, technology platforms and deployment architectures do not pose any problems in carrying out this exercise. And since change is constant in business, this cathartic effort is not likely to be a single-shot affair.

A potential service offering of real value from the CIO’s stable? It has a quick turnaround and, for the most part, may not need face-to-face meetings or travel.

(concluded)

Reports and online views are organized presentations of information for ready comprehension and decision-making. They form a major part of the usable outputs of IT systems, serving as the basis for managing the operations of an enterprise. Yet these outputs, taken as a whole or individually, are not subjected to any kind of design rigor, except for their formats! Targeting this concern, this and a following blog introduce some basic concepts and build simple practices towards optimally designing this system of outputs.

Today, reports are viewable online and views are printable offline. Dashboards are a special kind of view that uses graphic metaphors instead of rows and columns. The discussion here refers to reports but is equally applicable to other forms of output. The principles and practices outlined apply to reports that are planned ground-up and developed for use, not to reports that are designed and retrieved entirely on the fly with a query-report engine or an analytics engine.

These ‘canned’ reports build up to a sizeable number in any application and have an abiding tendency to multiply weed-like, much beyond the original plans. One has only to look at any ERP roll-out to see it for real, though this dangerous ‘disease’ is not limited to ERP solutions alone. Why is it a ‘disease’, and dangerous at that? Multiply each report by its number of recipient users and total these up to get the number of report instances perused. Now multiply the number of instances by 10 minutes (or some other number, less or more), which could be the average time a user spends with a report instance. This is the (crudely estimated) amount of time, possibly of senior management, soaked up by these reports. Individually it may not be very significant, but collectively it could be quite substantial. In fact, it is simple to paralyze an organization without setting off alarms in any quarter – all that needs to be done is to ‘helpfully’ over-provision users in different parts of the organization with any number of reports!
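A quick illustration of the arithmetic, with wholly hypothetical numbers: 40 canned reports × 5 recipients each × 10 minutes per instance comes to 2,000 minutes – over 33 person-hours of largely managerial attention soaked up per reporting cycle.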

 

The obvious remedy, common sense tells us, is to strongly question the need for every report and remove the redundancies. Before we look at the remedy more closely, let us look at what these reports are like and what they are generally used for:

a) Dump or Log or Scroll Reports: these are records of every transaction processed by the application. There may be additional records showing the trail of events before and after the transactions. These reports are mainly used for statutory reasons, for audit purposes, as a historical archive and, sometimes, for information recovery. (When the primary purpose is information recovery, the Dump may not be human-readable and is usually processed by a system utility; it is then no longer considered a report.)

b) Transaction Reports: these are reports of transactions filtered by some selection criteria, sorted, summed up and formatted. Prior-period data may be included for comparison. These reports are of an informative kind: which product sold how much in which region, which parts were supplied by a vendor, which orders were processed by a machine shop, etc. A drill-down facility may be available to track the details of a transaction across functional silos. Usually these reports do not process the data beyond reporting it as such, except for some totaling or calculation of percentages. They are useful for managers to monitor the operations under their supervision.

c) Summary Reports: these reports abstract out the transactions and focus on various kinds of summaries. Of course, a drill-down may show the underlying transactions. These reports are used by senior managers to monitor the performance of their areas of operation at an aggregate level. Dashboards could be placed under this type.

d) Processing Reports: these reports, as the name implies, may include a significant amount of processing on the underlying data. This processing is distinct from merely crunching the data for presentation by way of charts and graphs. Senior managers may use these reports to look at scenarios that are not intrinsically modeled in the enterprise applications. A typical example is to pull out raw data, apply some adjustment rules and produce final adjusted numbers. The downside to these reports is the danger of mixing up processing with presentation: processing gets fragmented and is not standard across reports, leading to problems of reconciling different reports that work on the same data. For example, two reports on resource utilization may differ depending on how they process the utilization data – one may round off to the nearest week while the other processes the data as is, in days, without any round-off.
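A minimal Python illustration of that reconciliation problem, with hypothetical figures – two reports computing utilization from the same bookings, one rounding to whole weeks, the other taking days as is:

```python
days_booked = [3, 9, 11]          # effort on three assignments, in days
working_days_available = 25       # in the period

# Report 1: processes the data as is, in days
util_daily = sum(days_booked) / working_days_available

# Report 2: rounds each booking to the nearest whole week (5 working days)
weeks_booked = [round(d / 5) for d in days_booked]   # -> [1, 2, 2]
util_weekly = sum(w * 5 for w in weeks_booked) / working_days_available

print(f"report 1: {util_daily:.0%}, report 2: {util_weekly:.0%}")
# report 1: 92%, report 2: 100% -- same data, two irreconcilable numbers
```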

 

Often in ERP rollouts, loading a good amount of processing logic into reports is common practice, the alternative – customizing the ERP – being formidable.

It is another matter that when the enterprise model is complex, as with ERP solutions, reports (not limited to Processing Reports) may differ simply on where they pull their data from (ignoring for a moment the differences in processing mentioned above), and enormous effort is wasted on reconciling the different reports. Going back to the example of reporting on the utilization of human resources: the report pulling data from the HR function would not easily match the report pulling data from the Projects function.

e) Exception Reports: these reports, different from alerts, draw the attention of operating personnel and managers to deviations from the established operating norms. It is easy to envisage exception reports in every aspect of operations. Example: a report recommending the replenishment of specific stock items.

And some of them are not directly related to the operations. For instance, exception reporting is very effective in spotlighting data-validation and integrity errors for subsequent correction. Security aspects, like attempted security breaches, are usually reported as exceptions.

The above taxonomy of reports is sufficient for the purpose of the discussion here, even if it is not all-inclusive. The report types are not mutually exclusive: a report on the ageing of customers’ pending bill payments could first be considered an exception report insofar as it highlights an abnormal situation for follow-up; it may also qualify as a Summary Report. The function overrides the form.

Reports usually push for some organizational response. Transaction and Summary Reports focus on the performance of one or more entities and their interplay, and provide the basis for broad-based follow-up actions. Exception Reports provoke pointed actions to handle specific deviations. Dump Reports do not trigger any immediate response.

With this background, we are ready to go back to the ‘disease’ and the common-sense remedy we talked about earlier.

At this point it is more interesting to look at the reports and views, taken as a whole or individually, in the enlarged perspective of how aligned they are to the business, and not merely for the purpose of curbing excesses. The impact of closely aligning the outputs to the needs of the business would be positively beneficial, given that the organization depends mainly on these reports and views for its life-signs and to manage itself head to tail.

As mentioned at the outset, surprisingly, from a software engineering (or is it information engineering?) perspective, this important piece of an organization’s information systems has not been subjected to much design rigor, formal or otherwise, to optimally arrange for business alignment.

Will set off on this un-rutted path in a soon-to-be blog.
