
Posts Tagged ‘Business Analyst’

New Hatton Garden heist image

During a bank heist the Chief told the Sergeant to cover all exits so the robbers could not get away.

Later, the Sergeant reported to the Chief.

“Sorry, sir, but they got away.”

The Chief, very disappointed, said, “I told you to cover all exits!”

“I did,” replied the Sergeant, “but they got away through the entrance.”

That was in jest.

But in real life, the IT/services sector is not very different.

How often we tell our customers: “Mr. Customer, you never brought this up all this time and now…”

I’m reminded of an incident where a CFO kept asking for more reports. The PM (Project Manager), in sheer disgust, got his manager to rein the CFO in. After the polite handshakes, the manager brought up the issue. He assured the CFO they were not reluctant to give him what he wanted, but that these would be regarded as chargeable change requests.

The CFO, in an even tone, drew the manager’s attention to a line in the Work Order.

It said “Accounts Receivable”.

“You had claimed you’ve rolled it out at a number of other sites. And don’t you know AR implies all these reports?”

End of discussion on scope creep.

When we unleash an untrained or inexperienced business analyst on our customer – these days many a youngster aspires to become one (a business analyst, I mean) – he fails to see beyond what the customer says literally. And the customer goes away with the comforting thought that the analyst has understood him in full. The analyst may be likened to the stenographers of yesteryear, operating strictly in what-you-said-is-what-you-get mode – a great disservice to our customer.

An interesting aside: the image appearing above is one officially released of the locker room in the Hatton Garden heist. The burglars, it is believed, first entered at about 9.20pm on 2 April and stayed until 8.05am the next morning, Good Friday. An alarm went off at 12.21am on 3 April, about three hours after the gang entered the vault area, according to timings released by police. The gang returned to the vault on Saturday 4 April at about 10.17pm, staying until 6.30am the next morning. Key staff were off work because of the Easter holiday, and police were alerted to the burglary just after 8am on Tuesday 7 April. The alarm was recorded and transferred to the police’s computer-aided dispatch system. “A grade was applied to the call that meant that no police response was deemed to be required,” the statement released by the police said. “An internal investigation is ongoing to identify why this grade was applied to the call in conjunction with the alarm company.”

The final disposition of the investigation is not known. I am sure an interesting story is waiting to be unravelled.

End

 

 

Credits: ajokeaday.com and press for the info on the heist.


Read Full Post »

One of the biggest and least appreciated challenges in building software and systems is drawing the line on which features are in and which are not. Whenever you catch the smell of feature creep, call on this modern parable. Or, better still, at the project kickoff held right at the outset, when expectations, success factors and scope are discussed, it may be a good idea to walk your audience through this story during the coffee break:


Once upon a time, in a kingdom not far from here, a king summoned two of his advisors for a test. He showed them both a shiny metal box with two slots in the top, a control knob, and a lever. “What do you think this is?”

One advisor, an engineer, answered first. “It is a toaster,” he said.

The king asked, “How would you design an embedded computer for it?”

The engineer replied, “Using a four-bit microcontroller, I would write a simple program that reads the darkness knob and quantizes its position to one of 16 shades of darkness, from snow white to coal black. The program would use that darkness level as the index to a 16-element table of initial timer values. Then it would turn on the heating elements and start the timer with the initial value selected from the table. At the end of the time delay, it would turn off the heat and pop up the toast. Come back next week, and I’ll show you a working prototype.”

The second advisor, an IT Analyst, immediately recognized the danger of such short-sighted thinking. He said, “Toasters don’t just turn bread into toast, they are also used to warm frozen waffles. What you see before you is really a breakfast food cooker. As the subjects of your kingdom become more sophisticated, they will demand more capabilities. They will need a breakfast food cooker that can also cook sausage, fry bacon, and make scrambled eggs. A toaster that only makes toast will soon be obsolete. If we don’t look to the future, we will have to completely redesign the toaster in just a few years.”

“With this in mind, we can formulate a more intelligent solution to the problem. First, create a class of breakfast foods. Specialize this class into subclasses: grains, pork, and poultry. The specialization process should be repeated with grains divided into toast, muffins, pancakes, and waffles; pork divided into sausage, links, and bacon; and poultry divided into scrambled eggs, hard-boiled eggs, poached eggs, fried eggs, and various omelet classes.”

“The ham and cheese omelet class is worth special attention because it must inherit characteristics from the pork, dairy, and poultry classes. Thus, we see that the problem cannot be properly solved without multiple inheritance. At run time, the program must create the proper object and send a message to the object that says, ‘Cook yourself.’ The semantics of this message depend, of course, on the kind of object, so they have a different meaning to a piece of toast than to scrambled eggs.”

“Reviewing the process so far, we see that the analysis phase has revealed that the primary requirement is to cook any kind of breakfast food. In the design phase, we have discovered some derived requirements. Specifically, we need an object-oriented language with multiple inheritance. Of course, users don’t want the eggs to get cold while the bacon is frying, so concurrent processing is required, too.”

“We must not forget the user interface. The lever that lowers the food lacks versatility, and the darkness knob is confusing. Users won’t buy the product unless it has a user-friendly, graphical interface. When the breakfast cooker is plugged in, users should see a cowboy boot on the screen. Users click on it, and the message ‘Booting UNIX v.8.3’ appears on the screen. (UNIX 8.3 should be out by the time the product gets to the market.) Users can pull down a menu and click on the foods they want to cook.”

“Having made the wise decision of specifying the software first in the design phase, all that remains is to pick an adequate hardware platform for the implementation phase. An Intel 80386 with 8 MB of memory, a 30 MB hard disk, and a VGA monitor should be sufficient. If you select a multitasking, object-oriented language that supports multiple inheritance and has a built-in GUI, writing the program will be a snap. (Imagine the difficulty we would have had if we had foolishly allowed a hardware-first design strategy to lock us into a four-bit microcontroller!)”

The king wisely had the IT Analyst beheaded, and they all lived happily ever after.

End
Credit: Unknown Usenet source (edited), wackywits.com, openclipart.com (seanujones) and public-domain-photos.com.

Read Full Post »

 

In lean times, any organization responds by tightening its belt and putting new projects on the back burner. In their place, a number of quick-yielding initiatives are usually launched that are limited in scope, focused on results and often cut across functional silos. More often than not, these initiatives fly below the IT radar and their roll-out gets little IT support. This is a great opportunity for IT to step forward and support these initiatives effectively.

 

Let me present one such example of how IT made itself quite useful.

 

The organization is in the business of executing (fixed-price as well as time-and-material) software projects for its customers by deploying the software professionals on its rolls. It had a top-heavy structure. To make it more even-keeled and to reduce overall costs of operation, a decision was taken to hire fresh-from-campus trainees (CT) in small numbers, induct them with adequate training and then staff them on projects. Intuitively it made a lot of sense to have some fresher recruits; they were skilled in contemporary technologies, high-energy and performance-driven.

 

The routine HR reports did not have much to say about these trainees in particular: whether the scheme was working, and with what efficacy. Of course, they did show that the cost per employee-month had dropped somewhat in the monthly payroll. But what about the impact on the business?

 

For starters, IT decided to separately tag the various batches of trainees inducted into the organization so that their performance could be tracked. Two other models of hiring were also concurrently in play. Some business-experienced, but not technology-ready, professionals were taken in under a Hire-Build-Deploy (HBD) model; they were sent out for intensive specialized training and brought back into the organization. There were also Lateral Hires (LH) who had a little experience and were a little ahead of the fresh trainees on the learning curve. These lateral hires, unlike the CT and HBD hires, were hired at any time of the year based on need rather than in batches, and they were not put through the same training as the CT and HBD batches. They were tagged into yearly buckets (for example, “07 LH” were those laterally hired in 2007). So there were three different models, and several batches of trainees/hires under these models, each uniquely tagged.

 

Now, IT created a few simple business-relevant views (and values) on these batches; a rough sketch of how such views might be computed appears after the list:

– How many months of billing did each batch generate, on average, in its first year and in its second year in the organization (tracking was limited to the first two years)?

– How quickly did these hires become billable?

– What kind of billing rates did these hires realize?

– Which projects absorbed the most trainees? …
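
By way of illustration only: a minimal sketch of how views like these might be computed from a per-month billing/allocation extract. The file name and column names (batch_tag, hire_date, first_billed_date, billed_months_y1/y2, billing_rate, project) are hypothetical, not the organization’s actual schema.

```python
# A rough sketch, assuming a hypothetical extract of hires with their batch tags
# (e.g. "07 LH", "08 CT") and billing history.
import pandas as pd

hires = pd.read_csv("hires.csv", parse_dates=["hire_date", "first_billed_date"])

# Months each hire took to become billable
hires["months_to_billable"] = (
    hires["first_billed_date"] - hires["hire_date"]
).dt.days / 30.0

# Batch-level views: average billed months in years 1 and 2, speed to billing,
# and realized billing rates
views = hires.groupby("batch_tag").agg(
    avg_billed_months_y1=("billed_months_y1", "mean"),
    avg_billed_months_y2=("billed_months_y2", "mean"),
    avg_months_to_billable=("months_to_billable", "mean"),
    avg_billing_rate=("billing_rate", "mean"),
)
print(views.sort_values("avg_billed_months_y1", ascending=False))

# Which projects absorbed the most campus trainees (CT batches)?
ct = hires[hires["batch_tag"].str.contains("CT", na=False)]
print(ct["project"].value_counts().head(10))
```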

 

 

Though the performance of the hires with regard to billing was not entirely in their own hands, some useful pointers were nevertheless obtained for the business. As expected, LH did the best in overall performance, followed by HBD and CT, in that order. What was not expected was a detail: an LH candidate with one year of total experience did better than an equivalent CT candidate with the same one year of experience. The LH candidate perhaps showed the implicit advantage of a hiring process that, with prior wisdom, successfully matched the candidate’s profile to the demands of an available billing position.

 

Year-on-year performance comparison of batches validated that a) the selection process was getting better at specifying and assessing skills and b) the various improvements brought about in the induction training were paying off; it also pointed to opportunities for doing even better. Projects that absorbed a good number of these hires presented both an opportunity and a risk of diluting quality for those customers, if overdone mindlessly. Clearly, the opportunity is to cut back on employee costs in the project and to pass on at least part of the gains to the customer too; and, more importantly, to figure out how this practice could be replicated in other, laggard projects.

 

To cut a long story short, these views were useful to the business in figuring out a) whether an initiative works for the organization, b) which good practices need to be intensified, and c) which practices need to be re-examined for better results. The prevailing enterprise applications do not support these overnight initiatives as well as required.

 

If only IT stays connected with the initiatives undertaken from time to time by an organization, there are plenty of opportunities to make a difference to operations by providing actionable insights and value. Its cross-functional vision enables it to support these initiatives uniquely and quite effectively. Of course, this requires IT to actively scan and sense these possibilities and to step forward unbidden to offer support. The opportunities may not come its way cut and dried, laid out neatly on a plate.

 

All of these apply even during ‘peace’ times.

 

This way, IT is in Business, good times or not!

Read Full Post »

 

A question I often pop at software professionals is: how do you evaluate an OO design? (Assume for the present that the functional completeness of the design is not in question.) The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism… are implemented in the design, how well reuse is employed…, and some get into OO metrics.

 

I rarely get countered with the point that the question is a wide-open one: there are several aspects to a design (some twenty-plus non-functional attributes), so which one do I have in mind for evaluating it? After all, a design is a model for realizing both the functional and the non-functional user requirements.

 

If I were asked to be more specific about my chief concern with regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software these changes are easy to make; with some, it is gut-wrenching. And, today, a good part of any IT (non-capex) budget is spent on getting software to change in step with business needs.

 

So the concern over a software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this just the usual non-functional attribute of ‘maintainability’? Maybe, in part. I would rather think of it as legitimate evolution of the software, while ‘maintenance’ connotes status quo. And today the pace of this evolution has quickened even in ‘stable’ businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. This could also be turned on its head to ask: how does one produce a design that readily accommodates changes?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts such as encapsulation, inheritance, the interface mechanism, etc. are suited to coping with changes. So, obviously, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing out the nouns in the problem description… If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design is capable of coping with the changes to come.
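
As a small illustration of that noun-listing exercise (the problem statement and class names below are hypothetical, not drawn from any particular guide): from “a customer places an order for one or more products”, the nouns suggest the candidate classes.

```python
# Candidate classes drawn from the nouns of a hypothetical requirement:
# "A customer places an order for one or more products."
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    price: float

@dataclass
class Order:
    items: list[Product] = field(default_factory=list)

    def total(self) -> float:
        return sum(p.price for p in self.items)

@dataclass
class Customer:
    name: str
    orders: list[Order] = field(default_factory=list)

# Further nouns ("invoice", "shipment"...) would become candidate classes as the
# analysis proceeds, keeping the design's abstractions aligned with the problem's.
```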

 

The same principle applies to changes. Changes, too, demand a set of abstractions in the areas of change if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers on the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements and, importantly, how those requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section in the requirements document distinctly devoted to foreseeable evolutionary changes? Not in nine-plus cases out of ten. Is it any wonder our systems are not well equipped to adapt to the flow of time?

 

The software development community could counter: “How can we foresee changes to come? If we could, we would provide for them from the word go.” This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user’s table questions specifically targeting the future. Some are obvious in the trade and are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-person’s incentives or commission: The scheme for incentivising sales-persons changes from time to time, even mid-year, depending on the business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted (a small sketch of one way to do this follows).
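
A minimal sketch of what abstracting the commission scheme might look like; the interface and class names are hypothetical and only illustrate the idea of keeping the volatile policy behind an abstraction, so a mid-year switch does not ripple through the rest of the code.

```python
# A hedged sketch: the commission scheme is the area of change, so it is kept
# behind an abstraction; payout processing depends only on the abstraction.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Sale:
    amount: float
    is_new_client: bool

class CommissionScheme(ABC):
    @abstractmethod
    def commission(self, sale: Sale) -> float: ...

class FlatRateScheme(CommissionScheme):
    def __init__(self, rate: float):
        self.rate = rate

    def commission(self, sale: Sale) -> float:
        return sale.amount * self.rate

class NewClientFocusScheme(CommissionScheme):
    """Healthy quarter: reward new-client wins more heavily."""
    def commission(self, sale: Sale) -> float:
        return sale.amount * (0.05 if sale.is_new_client else 0.02)

def quarterly_payout(sales: list[Sale], scheme: CommissionScheme) -> float:
    # The caller never knows which concrete scheme is in force.
    return sum(scheme.commission(s) for s in sales)

sales = [Sale(10_000, True), Sale(25_000, False)]
print(quarterly_payout(sales, FlatRateScheme(0.03)))    # current policy
print(quarterly_payout(sales, NewClientFocusScheme()))  # switched mid-year
```

Switching the scheme then means adding or selecting one class, not rewriting the payout logic.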

 

However, plans to open a new office, start a new distribution channel, introduce a new pricing policy or new service offerings, acquire a company… may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be privy to some of these plans!

 

In summary, a formal and focused business volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in future (to the foreseeable extent) are identified and the design abstractions are appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.
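
At the simpler end of that range, parameterization might look like this sketch (file and key names hypothetical), reusing the tax-law example from above: the volatile values live in configuration, so a change in law becomes a data edit rather than a code change.

```python
# A minimal sketch of simple parameterization: tax rates, a known area of
# change, are read from configuration instead of being hard-coded.
import json

DEFAULT_RATES = {"standard_rate": 0.18, "reduced_rate": 0.05}

def load_tax_rates(path: str = "tax_rates.json") -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return DEFAULT_RATES  # fall back to defaults for this sketch

def tax_amount(net: float, category: str, rates: dict) -> float:
    return net * rates.get(f"{category}_rate", rates["standard_rate"])

rates = load_tax_rates()
print(tax_amount(1000.0, "standard", rates))
print(tax_amount(1000.0, "reduced", rates))
```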

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in the user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when there are changes to current practice, reflecting them could turn out to be a minor or a significant effort depending on the degrees of freedom in the initial layout. For instance, consider organizations that frequently reorganize their operations – divisions and departments merge and split, get centralized and decentralized… The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario may be in some kind of flux. These may get fleshed out later. Design abstractions must be able to cope with these too.

 

All along, software architects and designers were the ones required to think in abstractions. Are we now asking our Business Analysts to get into the groove as well? Yes, that’s the drift.

 

How do we build systems for businesses that are intrinsically very volatile? We will look at that in a post to follow.

Read Full Post »