Posts Tagged ‘CIO’

An earlier post, ‘Enhancing Values’, talked about some simple ways of enhancing the value of custom-made software. This post takes up the same question of enhancing the value delivered by a software solution, from a slightly different perspective.

Projects involving development of custom software are especially good opportunities to deliver a significant, special punch for the business – something an off-the-shelf solution often falls short of, trying as it does to address the needs of the widest cross-section of customers with the commonest capabilities.

These opportunities are not readily served up at the table – they need to be excavated. The effort is handsomely rewarded when the business gains in real terms from exercising the punch.

A recent experience of this kind illustrates the point.

The organization manufactures or sources basic equipment and executes infrastructure projects using it. It is rolling out an ERP for the whole enterprise. In the pre-sales phase, the marketing function receives the requirements from the customer and responds with a design that includes the specs of major equipment. When the customer order is received, these go as inputs to R & D, which prepares the engineering drawing using a PLM solution, prepares the BOM and triggers further processes in the procurement, manufacturing and contracting functions.

The PLM and the ERP are not tightly integrated. The PLM generates a BOM for a drawing, but the Codes for the Items in the BOM have to be appended manually before the completed BOM can be uploaded into the ERP.

A simple application was needed to close this process gap.

The requirements were outlined as: the BOM would be imported into the application and processed for all Items it contains. For each Item in the BOM, the application would search its database (the Item Master would be periodically downloaded from the ERP; a real-time interface was not envisaged) and produce a standard Code for the Item by matching the attributes specified for the Item in the BOM against the Item Master. For example, an electric motor may have its type, horse-power, number of poles, etc. specified as attributes. These would be used to obtain a match in the Item Master and retrieve the Code for the standard Item. For sheet-metal Items, the set of attributes was different. Once all Item Codes are appended, the BOM would be exported in a suitable format and uploaded into the ERP by the ERP Support Cell. The process is complete with the completed BOM available for all downstream processes.
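The attribute-matching step can be sketched roughly as below. This is only a minimal illustration – the item names, attribute sets and code formats are invented, not taken from the actual system:

```python
# Periodically downloaded Item Master (illustrative rows only).
ITEM_MASTER = [
    {"code": "MTR-IND-5HP-4P", "type": "induction motor", "hp": 5, "poles": 4},
    {"code": "MTR-IND-10HP-4P", "type": "induction motor", "hp": 10, "poles": 4},
]

def find_code(bom_item, master=ITEM_MASTER):
    """Return the standard Item Code whose attributes all match the
    BOM Item's specified attributes; None means the Item is new."""
    for item in master:
        if all(item.get(attr) == value for attr, value in bom_item.items()):
            return item["code"]
    return None

# One BOM Item specified by its attributes, as in the motor example.
motor = {"type": "induction motor", "hp": 5, "poles": 4}
code = find_code(motor)  # -> "MTR-IND-5HP-4P"
```

A `None` result is exactly the exception case discussed next: no match, hence a new Item.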

Continuing with the requirements: exception handling was a little more involved – when there was no match, the Item was new. New Item Codes are created in the ERP by a designated member of the ERP Support Cell, who sits a few tables away from the R & D section. How he creates new Item Codes is known to R & D. So R & D takes it upon itself to design a Code for the new Item using his logic and sends it to him as a recommendation, along with a duly filled-in indent for creating the new Item in the ERP. Meanwhile, assuming the new Item Code would be created in the ERP, R & D completes its BOM with standard and recommended new Item Codes and exports the BOM ready for import into the ERP.

Taking the simpler issue first: the last part of the requirements needed to be cleaned up. The process of R & D designing a new Item Code and proactively exporting the BOM with these new Codes plugged in is error-prone, and hence was cut out. In the revised, wrinkle-free process, R & D merely forwards its indent for a new Item Code to the member in the ERP Support Cell and waits for him to create the new Item and download it for this application. In the second run, the application will now find the Item Code. In this way the logic for creating a new Item Code can change independently, without any impact. Of course, it now means that the BOM cannot be completed until the Code is created and the Item Master downloaded. Multiple iterations can be avoided if, in the first run itself, a consolidated indent for all new Items is generated and processed by the ERP Support Cell in one shot. In this revised process all Codes are assigned by the application without any manual step.

Now to the nub: the requirements even at this stage missed an important business opportunity – is it not possible to negotiate on the BOM? After all, the BOM made up over 70% of the total cost, and any savings on the BOM cost would reflect significantly on the bottom line. For every Item in the BOM, match or no match, there could be opportunities to a) substitute an Item with a cheaper equivalent, b) use a similar Item (not necessarily an exact match) readily available in stock, or c) use a similar Item from a more reliable supplier, and so on.

When this possibility was pointed out, the user expressed legitimate fears of the negotiation capability being abused to cut corners, compromising quality for the customer. But negotiation on the BOM need not always mean dilution of specifications or quality; there could be genuine opportunities to effect some savings. The kinds of permissible negotiation and the approval levels may be specified to guard against abuse. The second objection was that the equipment specs are laid down and costed by the customer-facing marketing function as part of its solution to the customer’s needs and hence are not negotiable. While this may be true, it is quite possible that certain attributes of an Item are non-negotiable while others are, leading to a few possibilities.

The user is still not very convinced about negotiating the BOM, but has promised to give it a deeper thought. If the user finally finds legitimate room for negotiation, the implication for the application is that it would be required to present a palette of exact (if there is one) and approximate matches when the Item Master is searched using a set of attributes. It could also indicate the impact of making a specific choice from available alternatives. It may also mean that the application may have to be aware of many other things like Work-In-Progress, etc. when it performs the search.
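The “palette of exact and approximate matches” could be as simple as scoring each Item Master entry by the fraction of specified attributes it matches, so an exact match (score 1.0) appears alongside near-equivalents. A rough sketch with invented data – in practice, non-negotiable attributes would be enforced as a hard filter before scoring:

```python
def rank_matches(bom_item, master, top=3):
    """Return up to `top` (score, code) pairs, best first; a score of 1.0
    is an exact match on all specified attributes."""
    scored = []
    for item in master:
        hits = sum(1 for k, v in bom_item.items() if item.get(k) == v)
        scored.append((hits / len(bom_item), item["code"]))
    scored.sort(reverse=True)
    return scored[:top]

master = [
    {"code": "MTR-IND-5HP-4P", "type": "induction motor", "hp": 5, "poles": 4},
    {"code": "MTR-IND-10HP-4P", "type": "induction motor", "hp": 10, "poles": 4},
]
# Exact match first, then the 10 HP near-equivalent (2 of 3 attributes match).
palette = rank_matches({"type": "induction motor", "hp": 5, "poles": 4}, master)
```

Indicating the impact of each choice (price, stock, supplier reliability) would mean joining in further data at scoring time, which is where Work-In-Progress awareness would enter.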

While collecting requirements, an analyst must tirelessly strive to build into the software solution aggressive support for business objectives. A feel for the business domain certainly helps. Anything short of this is a missed opportunity!

Read Full Post »


In these times, an organization responds by tightening its belt and putting new projects on the back-burner. In their place, a number of quick-yielding initiatives are usually launched that are limited in scope, focused on results and often cut across functional silos. More often than not, these initiatives fly below the IT radar and their roll-out has little IT support. This is a great opportunity for IT to step forward and support them effectively.


Let me present one such example of how IT made itself quite useful.


The organization is in the business of executing (fixed-price as well as time-and-material) software projects for its customers by deploying the software professionals on its rolls. It had a top-heavy structure. To make it more even-keeled and to reduce overall costs of operation, a decision was taken to hire fresh-from-campus trainees (CT) in small numbers, induct them with adequate training and then staff them in projects. Intuitively it made a lot of sense to have some fresh recruits: they were skilled in contemporary technologies, high-energy and performance-driven.


The routine HR reports did not have much to say about these trainees – whether the scheme was working and with what efficacy. Of course, they did show that the cost per employee-month dropped somewhat in the monthly payroll. But what about the impact on the business?


For starters, IT decided to separately tag the various batches of trainees inducted into the organization so that their performance could be tracked. Two other models of hiring were also concurrently in play. Some business-experienced, but not technology-ready, professionals were taken in under a Hire-Build-Deploy (HBD) model: they were sent out for intensive specialized training and then brought back into the organization. There were also Lateral Hires (LH), who had a little experience and were a little ahead of the fresh trainees on the learning curve. These lateral hires, unlike the CT and HBD hires, were hired at any time of the year based on needs and not brought in batches; nor were they trained like the CT and HBD batches. They were tagged in yearly buckets (example: ‘07 LH’ were those laterally hired in the year 2007). So there were three different models, and several batches of trainees/hires under these models, each uniquely tagged.


Now, IT created a few simple business-relevant views (and values) on these batches:

– How many months of billing did each batch generate on average in its first year and in its second year in the organization (tracking was limited to the first two years)?

– How quickly did these hires become billable?

– What kind of billing rates did these hires realize?

– Which Projects absorbed most trainees? …
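With the batches tagged, the first of these views reduces to a simple aggregation. A sketch with made-up records, following the tagging scheme described above:

```python
from collections import defaultdict

# Illustrative records only; tags follow the "'07 LH" batch convention.
hires = [
    {"tag": "07 LH", "y1_months": 9, "y2_months": 11},
    {"tag": "07 LH", "y1_months": 7, "y2_months": 10},
    {"tag": "07 CT", "y1_months": 4, "y2_months": 9},
]

def avg_billing_by_batch(records):
    """Average billed months per hire in years 1 and 2, keyed by batch tag."""
    sums = defaultdict(lambda: [0, 0, 0])  # [year-1 total, year-2 total, head-count]
    for r in records:
        s = sums[r["tag"]]
        s[0] += r["y1_months"]
        s[1] += r["y2_months"]
        s[2] += 1
    return {tag: (y1 / n, y2 / n) for tag, (y1, y2, n) in sums.items()}
```

The other views (time to first billing, realized rates, absorption by Project) are variations on the same grouping, keyed by the batch tag.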



Though the performance of the hires with regard to billing was not entirely in their own hands, some useful pointers were obtained for the business. Expectedly, LH did the best in overall performance, followed by HBD and CT in that order. What was not expected was a detail: an LH candidate with one year of total experience did better than an equivalent CT candidate with the same one year of experience. The LH candidate perhaps showed the implicit advantage of a hiring process that, with prior wisdom, successfully matched the candidate’s profile to the demands of an available billing position.


Year-on-year comparison of batch performance validated that a) the selection process was getting better at specifying and assessing skills and b) the various improvements brought about in the induction training were paying off; it also pointed to opportunities for doing even better. Projects that absorbed a good number of these hires presented both an opportunity and a risk of diluting quality for those customers, if overdone mindlessly. Clearly, the opportunity lies in cutting back on employee costs in the Project and passing on at least part of the gains to the customer; and, more importantly, in replicating the practice in other, laggard Projects.


To cut a long story short, these views were useful to the business to figure out a) whether an initiative works for the organization, b) which good practices need to be intensified and c) which practices need to be re-examined for better results. The prevailing enterprise applications do not support these overnight initiatives as well as required.


If only IT stays connected with the initiatives an organization undertakes from time to time, there are plenty of opportunities for making a difference to operations by providing actionable insights and value. Its cross-functional vision enables IT to support these initiatives uniquely and effectively. Of course, this requires IT to actively scan and sense the possibilities and step forward unbidden to offer support. The opportunities may not come its way cut and dried, laid out neatly on a plate.


All of these apply even during ‘peace’ times.


This way, IT is in Business, good times or not!

Read Full Post »


A question I often pop at software professionals is: how do you evaluate an OO design? (We assume for the present that functional completeness of the design is not in question.) The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, or how well reuse is achieved. And some get into OO metrics.


I am rarely countered with the observation that the question is a wide-open one: there are several aspects to a design (some 20-plus non-functional attributes) – which one do I have in mind for evaluating it? After all, a design is a model for realizing both functional and non-functional user requirements.


If I were asked to be more specific about my chief concern in regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software these changes are easy to make; with some, gut-wrenching. And today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with business needs.


So the concern over the software design being able to take changes in its stride is legitimate and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this all just the usual non-functional attribute of ‘maintainability’? Maybe, in part. I would rather think of it as legitimate evolution of the software, while ‘maintenance’ connotes status quo. And today, the pace of this evolution has quickened even in ‘stable’ businesses.


Now let us proceed to figure out what could be the criterion for evaluating a design from this perspective. The question could also be turned on its head: how does one produce a design that readily accommodates changes?


OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts – encapsulation, inheritance, the interface mechanism, etc. – are suited to coping with changes. So, obviously, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?


This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing out the nouns in the problem description. If this is done as it should be, the initial alignment is ensured. But this still does not guarantee that the design is capable of coping with the changes to come.


The same principle applies to changes. Changes also demand a set of abstractions in the areas of change, if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers the promise of the OO paradigm.


So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements – and, importantly, assessing how those requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section of the requirements document distinctly devoted to foreseeable evolutionary changes? Not in 9+ cases out of 10. No wonder our systems are not well equipped to adapt to the flow of time.


The software development community could come back with: “How can we foresee changes to come? If we could, we would provide for them from the go.” This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user’s table questions specially targeting the future. Some are obvious in the trade and are well taken care of even now. Two examples:




Tax laws: These could change from time to time.


Sales-persons’ incentives or commission: the scheme for incentivising sales-persons changes from time to time, even mid-year, depending on business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted.
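The incentive-scheme example is a classic case for a Strategy-style abstraction: the payroll code depends only on the abstract scheme, and concrete schemes can be swapped mid-year without touching it. A hypothetical sketch – the class names and commission rates are invented for illustration:

```python
from abc import ABC, abstractmethod

class IncentiveScheme(ABC):
    """The volatile part, abstracted: each quarter's policy is one subclass."""
    @abstractmethod
    def commission(self, sale: dict) -> float: ...

class NewClientFocus(IncentiveScheme):
    """Healthy quarter: reward new-client sales more."""
    def commission(self, sale):
        return sale["value"] * (0.08 if sale["new_client"] else 0.03)

class AccountMiningFocus(IncentiveScheme):
    """Sluggish quarter: reward mining current accounts."""
    def commission(self, sale):
        return sale["value"] * (0.03 if sale["new_client"] else 0.06)

def quarterly_payout(scheme: IncentiveScheme, sales: list) -> float:
    # The payroll logic never changes when the scheme does.
    return sum(scheme.commission(s) for s in sales)
```

Whether this takes the form of subclasses, configurable parameters, or rules in a table is a design-technique choice; the point is that the scheme is an abstraction, not hard-wired logic.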


However, plans to open a new office, start a new distribution channel, introduce a new pricing policy or new service offerings, acquire a company… may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be privy to some of these plans!


In summary, a formal and focused business volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in future (to the foreseeable extent) are identified and the design abstractions appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.


In fact it is a good idea to include a discussion on how the design would be impacted by anticipated and unanticipated changes in user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier ‘Change Management’ post talks about the sensitivity of effort estimates to changes in user requirements.


Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort, and when the current practice changes, reflecting those changes could turn out to be a minor or a significant effort depending on the degrees of freedom in the initial lay-out. For instance, consider organizations that frequently reorganize their operations – divisions and departments merge and split, get centralized and decentralized. The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.


As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason – the users involved may not be clear or articulate about their needs at that point in time, or the scenario is in some kind of flux. These may get fleshed out later. Design abstractions must be able to cope with these too.


All along, software architects and designers were required to think of abstractions. Now are we asking our Business Analysts to get into the groove too? Yes, that’s the drift.


How do we build systems for businesses which are intrinsically very volatile? We will look at that in a post to follow.

Read Full Post »


Let us go back to the order-entry example and its (single) business objective of reducing the time taken to enter error-free orders. We had developed a partial list of actions for assuring performance with regard to this objective, on two separate threads:

Business Execution thread:

– owner: customer, action: train end-users on orders and their entry.

– owner: customer, action: take up some process reengineering and rationalizing.

– owner: customer, action: make available end-users for training.

– owner: customer, action: provide adequate bandwidth to connect up with the back-end order-entry system.

IT Execution thread:

– owner: software service provider, action: train end-users on the new solution.

– owner: software service provider, action: design lightly loaded screens, minimize server visits…

– owner: software service provider, action: make the screens intuitively obvious, minimize clicks for main flow, use technologies like Ajax for rich user interface, generate meaningful error messages, provide useful defaults, drop-down selects, auto-fill, etc. to reduce data entry effort.

– owner: software service provider, action: include in requirements and design the feature to save a partially filled order; also it should be possible for another authorized user to later complete the partially filled order.

The customer’s organization owns the Business Execution thread while the IT Execution thread is driven by the service provider. In this example, there are only two broad organization-level ownerships shown. In general, there could be more specific roles, drawn from the stakeholders and their organizational structures.

Also, the actions read as generic in the above. In reality, they could get quite context-specific in terms of intents and measures (see examples below).

Some of the actions map directly to activities of a project plan. Example: training end-users on the new solution. And some could go under SDLC guidelines for the project, gross or fine-grained. Examples: [A design guideline: the landing page should not exceed 60 KB]; [A feature-level coding standard: stored procedures handling multiple on-screen selections should use table variables for simplicity and speed (this is fine-grained)].
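One way to keep this linkage explicit is a small traceability table from each guideline back to the failure syndrome (and hence the business objective) it protects against; the gaps then become queryable. A hypothetical sketch – the ids, rules and syndrome names are invented:

```python
# Each guideline records which business-level failure syndrome it guards against.
guidelines = [
    {"id": "D-01", "rule": "landing page not to exceed 60 KB",
     "syndrome": "sluggish response"},
    {"id": "C-07", "rule": "use table variables in multi-select stored procedures",
     "syndrome": "sluggish response"},
]

def unprotected(syndromes, guidelines):
    """Failure syndromes with no guideline guarding against them - gaps to plug."""
    covered = {g["syndrome"] for g in guidelines}
    return [s for s in syndromes if s not in covered]

gaps = unprotected(["sluggish response", "laborious data entry"], guidelines)
```

The same table, extended with a priority column, also gives a basis for ranking a prolific set of SDLC standards.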

Now the guidelines emanate strongly from business objectives, instead of degenerating into a huge ‘cut and pasted’ linear list. Even if the guidelines are already in place and were not generated in this manner, it would still be useful to validate them along these lines, to fill gaps and weed out the unimportant. For instance, it may show that certain business-level failure syndromes do not have sufficient guidelines protecting against them – a gap to be plugged. It may also be possible to apply a sense of priority to the set of SDLC standards, which always threaten to grow prolific and often mutually conflicting.

Incidentally, an earlier post, ‘Coding Standards – a chimera?’, spoke of the importance of relating coding standards to architectural attributes and other keys. Here we enlarge that to relate all SDLC standards, as strongly as possible, to business objectives – which in turn form the basis for the said architectural attributes.

Preemptive planning of this kind significantly enhances assurance of achieving intended business objectives.

In summary, the submission made here is to recognize and plan a Business Execution thread in mesh with the IT Execution thread, which is usually all of a conventional project plan. This stems from conceiving a project to include the main purpose of achieving business objectives, instead of limiting it to software deliverables as is current practice. With it comes the clear statement that business objectives have a greater chance of being realized only if both the customer and the service provider plan and work together from day one with an end-to-end vision. On this road, the SDLC standards stand out with definite, prioritized purpose.

Extending the scope of a project beyond the limits of software deliverables is an opportunity for the service provider to deliver tangible business value, rich with interesting possibilities for ongoing engagement with the customer. If ERP roll-outs are planned and executed along these lines (though not the whole nine yards yet), why not bespoke development?


Read Full Post »


More on the actions that go into Business Execution and IT Execution threads and some new opportunities for service providers:

In a typical ERP roll-out, most actions go into the Business Execution (BE) thread. There would be actions in the IT Execution (ITE) thread to install, configure and customize the solution. But these actions are driven by actions in the BE thread.

Should that not also be the case with non-ERP solution development and roll-out? Since the solution is being developed and is not ready off the shelf, there happens to be an intensive set of SDLC actions. Nevertheless, the interlock with the BE thread must be maximized so that course corrections, if any, are applied at the earliest point in time. Obviously these interlocks must be purposefully arranged for productive use of the customer’s time, and not be contrived.


There is a famous curve, usually produced in project kick-off meetings, which shows the actions on the part of the customer and the developer in a typical SDLC project, without formally identifying the two threads as we have done. Customer involvement peaks initially during the requirements-gathering phase, drops off during the design and coding phases, and perks up again during acceptance testing. This is most true of waterfall execution, but can be adapted for other paradigms of development as well.


The traditional SDLC model does not stretch itself (while ERP roll-outs may) to cover successful roll-out. This is a crucial activity: the solution developed morphs into shelf-ware if the roll-out is not handled with professional competence and precision.


There is an even more important question: are the business objectives of the project, as originally conceived by its sponsor, achieved through the usage of this solution? This immediately implies a period of usage by the end-users after the roll-out.


The ERP world makes a weak attempt by way of a post-implementation audit service, while the traditional SDLC model completely disassociates itself from this question. So, between the roll-out and the audit, who is concerned with the progress made? Can it be left to the customer to drive this march without any input of professional expertise? What are the sign-posts of success and the failure syndromes? Shouldn’t there be a well-planned, sustained effort during this critical phase? Note this is different from the L1-L2-L3-L4 support of Application Management Services.


The plea made here is that this is a definite opportunity for the service provider to enlarge the bouquet of services and the value he delivers to cover these milestones, and for the customer to gain some reassurance of achieving the intended benefits.


Some mature customers even now plan end-to-end and arrange assistance of this kind for themselves, either from the service provider or from some other agency. In any case there seems to be a clear need for formally structuring the actions and defining the services and deliverables associated with the two milestones – and planning them in from the ‘go’. Once these services mature, they could also lead to some risk-reward models for the service provider.


(More to follow)

Read Full Post »


A software service provider (this applies to internal IT too) usually evaluates his own performance on a customer’s projects by costs, effort and timeline. A project is considered successfully completed if the targets on these metrics are met. But while a project may be successfully completed and deployed, it may still not achieve the end-to-end business objectives set out initially by the project sponsor in the business case, for various reasons. One reason is that the service provider often sees only the requirements detailed out to him, and not the business objectives themselves. Could the project be planned more holistically if the service provider were exposed to the business objectives as well? Let us see how this could be done.


For example, consider the business objective: the time taken to enter an error-free customer order into the system needs to be reduced (metrics omitted).


For assuring performance in regard to this objective, it is translated into a set of actions necessary on the part of the stakeholders. These actions could be developed using any method. Here, we imagine the contrary, examine the reasons and develop preemptive actions.


Continuing with the order-entry example: the business objective of reducing the time taken to enter error-free orders may not be achieved for one or more of the following reasons (alongside each, a possible preemptive action to protect against that cause of failure is indicated, with its ownership):


– The end-users (customer’s order-entry personnel) are not familiar with the business aspects of a customer-order especially about the changes from earlier practices (owner: customer, action: train end-users on orders and their entry).


– The types of customer-orders and their entry are too various and difficult to comprehend (owner: customer, action: take up some process reengineering and rationalizing; however, much of this could lie outside the scope of the current context of developing a software solution for order-entry).


– The end-users are not trained on effectively using this software (owner: software service provider, action: train end-users on the new solution; owner: customer, action: make available end-users for training).   


– The system is sluggish in its responses when an order is entered (owner: customer, action: provide adequate bandwidth to connect up with the back-end order-entry system; owner: software service provider, action: design lightly loaded screens, minimize server visits…)


– The users have to do more work to get an order entered into the system (owner: software service provider, action: make the screens intuitively obvious, minimize clicks for main flow, use technologies like Ajax for rich user interface, generate meaningful error messages, provide defaults to reduce data entry effort…).


– If the order-entry is interrupted for some reason, user has to start all over again (owner: software service provider, action: include in requirements and design the feature to save a partially filled order; also it should be possible for another authorized user to later complete the partially filled order). 


The above list, though not all-inclusive, is sufficient to make the point.


Realization of business objectives in a project could be viewed as a combination of ‘Business Execution’ and ‘IT Execution’ threads. Actions such as rationalizing order types and their processing, training the end-users, etc. come under Business Execution thread while actions of SDLC and infrastructure kinds go under IT Execution.


Now the point being made is: a unified project plan must include actions from both threads, and their synchronization, for assurance on the achievement of the business objectives. Even though this is still an assurance and not a guarantee of results, the holistic approach is likely to yield better end-to-end results. This is in sharp contrast to the current practice, wherein the Business Execution thread is at best a diffused set of activities and ownership, not tightly managed – and the business objectives are not very visible to the service provider.


It is not as if all actions in the Business Execution thread are owned by the customer and, similarly, all actions in the IT Execution thread by the service provider. Example: [owner: customer, action: provide adequate bandwidth to connect up with the back-end order-entry system] goes under the IT Execution thread, as mentioned earlier.


It would be nice if the planning tool allowed the threads to be managed individually and collectively in a composite plan, with all interdependencies coded.
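Even without tool support, a minimal dependency model over both threads makes the mesh visible. A sketch using the standard library’s topological sorter (Python 3.9+); the task names are invented, and each entry lists the tasks it depends on:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# (thread, task) pairs; the set holds each task's prerequisites.
plan = {
    ("ITE", "gather requirements"): set(),
    ("ITE", "build order-entry screens"): {("ITE", "gather requirements")},
    ("BE", "train end-users"): {("ITE", "build order-entry screens")},
    ("BE", "provision bandwidth"): set(),
}

# One composite schedule that respects cross-thread dependencies.
order = list(TopologicalSorter(plan).static_order())
```

Filtering the pairs by thread gives each owner their own view, while `order` remains the single composite plan.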


Who manages the Business Execution thread? A simple, straightforward answer: a manager designated by the sponsor in the customer organization. Depending on the kinds of activities on this thread, other variations are possible. In simple cases, the service provider’s project manager could assume overall responsibility, keeping in mind that project management competency is what he brings to the table. Even now he manages actions that call for heavy participation by the customer: scope management, change management, risk management, knowledge transfer, testing and approving deliverables. This enhanced responsibility is clearly an opportunity for the customer to derive more value from the service provider, and for the service provider to differentiate his services.


Regardless of who manages it, the Business Execution thread must be recognized for what it is, planned, and meshed with the IT Execution thread.


On the business side, this is no longer the traditional role of someone acting merely as the single point of contact for the service provider. It is the more demanding role of managing an important thread of actions in the project.


(More to follow)





Before we get to the core, I must mention an interesting solution that addresses the concerns voiced about mixing business logic into Processing Reports. Recently I came across a case study where an MNC had the usual problem of reporting from a variety of dispersed and disparate sources such as Excel sheets, legacy ERP systems, etc. The reports were quite complex, and there was no single container to host all the processing logic. The organization deployed ETL software that fitted the bill and loaded the data into a Reporting Database! This SQL database was used purely for reporting. On this database they also built their business logic as a uniform interface and pulled out their reports. This architecture certainly fixed the problem of scattered business logic. Note that the data was still transactional; the database was not of the data-warehouse kind.
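The pattern can be sketched in miniature with SQLite standing in for the Reporting Database. The table, columns and the business rule (“pending” means unpaid) are all invented for illustration; the point is only that the logic lives once, in a view, and every report pulls from that view.

```python
import sqlite3

# In-memory stand-in for the Reporting Database loaded by the ETL step.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE invoices (id INTEGER, region TEXT, amount REAL, paid INTEGER)"
)
db.executemany("INSERT INTO invoices VALUES (?, ?, ?, ?)", [
    (1, "North", 1000.0, 0),
    (2, "North", 500.0, 1),
    (3, "South", 750.0, 0),
])

# The business logic ('pending' = unpaid, summed by region) is coded once,
# as a view -- the uniform interface every report queries.
db.execute("""
    CREATE VIEW pending_by_region AS
    SELECT region, SUM(amount) AS pending
    FROM invoices
    WHERE paid = 0
    GROUP BY region
""")

for row in db.execute("SELECT * FROM pending_by_region ORDER BY region"):
    print(row)  # ('North', 1000.0) then ('South', 750.0)
```

Two different reports querying `pending_by_region` cannot disagree on what “pending” means, which is exactly the reconciliation problem this architecture removes.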


Even when there is a container application, like an ERP instance, to host the business logic and the reports, this solution may become an alternative meriting serious consideration when the reports are quite complex. The downsides to this approach are: a) the introduction of an intermediate step and possible time delays, b) the output business logic is still separated from the transaction-related business logic, c) views may be generated from either the ERP instance or the Reporting Database, with the challenge of making them look alike, and d) the reports have to be coded explicitly instead of using the ERP-native report generator – with all the associated maintenance issues.


How would I, as an IT professional (not a management consultant), go about rationalizing the output system of reports (and views) for maximum business impact? The exercise may be applied to a system of reports that already exists, or to reports being planned for a new application. While some steps are obvious, some are not. The obvious steps (especially in an IT-mature organization) are included here for completeness:


         Compile a list of reports that need to be subjected to this exercise of rationalization.


         Develop the business purpose of each report. Weed out duplicate ways of stating the same purpose. Qualifiers are useful in generating variants of a business purpose: Shows payment-pending invoices by Region, Shows payment-pending invoices by Office, Shows payment-pending invoices by Customer, Shows payment-pending invoices by Product-line, and Shows payment-pending invoices by Period, etc. 


         One may or may not have the option of meaningfully (re)naming the reports to point to their purpose.


         Do a preliminary check on whether the information content of the report supports the business purpose. The depth of this check depends on the IT professional’s knowledge of the business and of best practices in the domain.


         Generate a reference matrix showing reports and their users. These users are grouped under their functional silos: Finance/Accounts-Receivables, HR/Payroll, etc.


         Classify the users of each report: he may be a ‘direct’ or ‘responsible’ user, using the report to manage his operations; or a ‘supervisory’ or ‘accountable’ user, using the report to review the operations with his team. An ‘informational’ user is merely informed by the report. This simple classification is adequate for most purposes.


         Revisit each report with its direct and supervisory users. Validate the business purpose, the information content and the format – the format aspect of a report, though quite important, is not pursued further in this blog. There are some interesting and powerful opportunities at this step to restore true value: a) Check if the report is used by the user as such, or if he does further processing to make it usable for its intended purpose. Very often the user applies some additional massaging to the numbers in the report: a missing summary, a ratio or a KPI, a comparison with a past period, etc. A significant efficiency contribution would be to cut out this massaging. b) More complex massaging is usually carried out in Excel. Can this be done away with, or at least seamlessly integrated? c) This is an opportunity to ‘hard’ reconcile the supervisory perspective of a business aspect with the direct operational perspective. A no-brainer simplification is to ensure the Transaction Report goes to operating personnel and the related Summary Report goes to supervisory personnel. d) Review the list of ‘informational’ users of this report and the reasons for their inclusion or exclusion; mark candidates for inclusion/exclusion.


         These done, take the discussion to the broader plane of the user’s responsibilities and how the reports support those responsibilities. This would reveal the ‘missing’ views and reports – potential for creating value. It is not unusual to find system outputs that do not cover the entire breadth of a user’s responsibilities or his KPIs.


         Review with each informational user the list of reports he receives and his thoughts on inclusions and exclusions. Go back to the direct and supervisory users of the reports to finalize the ‘informational’ inclusions and exclusions. At this point, the report may even undergo some changes to suit the needs of the informational users, or some missing reports may again be uncovered.


         Note that a report with multiple ‘responsible’ users, especially from different functional silos, strongly indicates multiple business purposes, stated or omitted. And a report with multiple purposes is a strong candidate for splitting.


         Multiple reports for the same or related purposes are good candidates for merging. When the business purpose is quite specific (not generic, like ‘highlights cost-overruns’), their distribution lists could still differ if they present different perspectives. Do they?


         Develop an exhaustive list of abnormal events that could occur in each functional silo and across silos. Relate each event to the Exception Report that flags it. This may reveal events with potentially serious consequences being passed through. It is also important to check: a) whether these events are flagged at the earliest possible instant after their occurrence, b) whether the reporting intensity is commensurate with the degree of abnormality, and c) whether the recipients of the reports include all users concerned with the origin of the events and the organization’s consequent responses. Without sufficient care here, process breaks could severely impair the organization’s ability to respond.


         A report-type view of the system of reports also throws up useful, if gross, pointers to imbalances. The absence of Log Reports may indicate holes in statutory compliance, weaknesses in security audit procedures, and in some cases even in recovery capabilities. Too few Exception Reports may point, as we have already seen, to a failure to flag significant abnormal events in the operations and respond to them quickly. Are the operating and supervisory personnel adequately (not overly) serviced with Transaction, Processing and Summary Reports covering their major responsibilities and accountabilities? Similarly, are business purposes adequately and powerfully supported? Are functional silos bridged optimally?
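Several of the checks above lend themselves to simple mechanical screening once the reference matrix is compiled. The sketch below is hypothetical (report names, users, silos and events are all invented): it flags split candidates (a report whose ‘responsible’ users span silos), merge candidates (reports sharing one specific purpose), and abnormal events not covered by any Exception Report.

```python
from collections import defaultdict

# Reference matrix: report -> purpose plus users as (name, silo, role).
reports = {
    "AR-Aging": {
        "purpose": "Shows payment-pending invoices by Customer",
        "users": [("clerk1", "Finance/AR", "direct"),
                  ("ar_mgr", "Finance/AR", "supervisory"),
                  ("sales_head", "Sales", "informational")],
    },
    "Overdue-Invoices": {
        "purpose": "Shows payment-pending invoices by Customer",
        "users": [("clerk2", "Finance/AR", "direct")],
    },
    "Project-Cost-Status": {
        "purpose": "Shows cost overruns by project",
        "users": [("pm1", "Projects", "direct"),
                  ("controller", "Finance/AR", "direct")],
    },
}

def split_candidates(reports):
    """Reports whose 'direct' (responsible) users span functional silos."""
    out = []
    for name, r in reports.items():
        silos = {silo for _, silo, role in r["users"] if role == "direct"}
        if len(silos) > 1:
            out.append(name)
    return out

def merge_candidates(reports):
    """Groups of reports stating the same specific business purpose."""
    by_purpose = defaultdict(list)
    for name, r in reports.items():
        by_purpose[r["purpose"]].append(name)
    return [group for group in by_purpose.values() if len(group) > 1]

# Abnormal events per silo vs. the Exception Reports that flag them.
events = {"stock_below_reorder": "Inventory", "invoice_overdue_90d": "Finance/AR"}
flagged = {"Replenishment-Recommendation": ["stock_below_reorder"]}
covered = {e for names in flagged.values() for e in names}
uncovered = [e for e in events if e not in covered]

print(split_candidates(reports))  # ['Project-Cost-Status']
print(merge_candidates(reports))  # [['AR-Aging', 'Overdue-Invoices']]
print(uncovered)                  # ['invoice_overdue_90d']
```

The screening only surfaces candidates; the validation conversations with the direct and supervisory users remain the heart of the exercise.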


It would be interesting to see if some principles of project portfolio management could be carried into this exercise of rationalizing system outputs. 


Just as we have rigor in database design (ERD, normalization…), this appears to be a ripe candidate for a formal model and practice, both for design and, importantly, for ongoing review.


In summary, rationalizing the system outputs has a ready pay-back in managerial effectiveness by: a) re-engineering the outputs for maximum business impact and operational efficiency, b) weeding out redundancies in the outputs as well as their distribution, c) discovering opportunities for filling gaps and creating value for the business, and d) making up for debilitating process breaks.


Importantly, note that IT application boundaries, technology platforms and deployment architectures do not pose any problems in carrying out this exercise. And since change is constant in businesses, this cathartic effort is not likely to be a single-shot affair.


A potential service offering of real value from the CIO’s stable? It has a quick turn-around and, for the most part, may not need face-to-face meetings or travel.






Reports and online views are organized presentations of information for ready comprehension and decision making. They form a major part of the usable outputs of IT systems and the basis for managing the operations of an enterprise. Yet these outputs, taken as a whole or individually, are not subjected to any kind of design rigor, except for their formats! Targeting this concern, this and a following blog introduce some basic concepts and build simple practices towards optimally designing this system of outputs.


Today, reports are viewable online and views are printable offline. Dashboards are a special kind of view that uses graphic metaphors instead of rows and columns. The discussion here refers to reports but is equally applicable to other forms of outputs. The principles and practices outlined apply to reports that are planned ground-up and developed for use, not to reports that are designed and retrieved entirely on-the-fly with a query-report engine or an analytics engine.


These ‘canned’ reports build up to a sizeable number in any application and have an abiding tendency to multiply, weed-like, far beyond the original plans. One only has to look at any ERP roll-out to see this for real, though the dangerous ‘disease’ is not limited to ERP solutions. Why is it a ‘disease’, and dangerous at that? Multiply the number of reports by the number of recipient users to get the total number of report instances perused. Now multiply that by 10 minutes (or some other number, less or more), the average time a user might spend with a report instance. This is a crude estimate of the time, possibly of the senior management, soaked up by these reports. Individually it may not be very significant, but collectively it could be quite substantial. In fact, it is simple to paralyze an organization without setting off alarms in any quarter – all that needs to be done is to ‘helpfully’ over-provision users in different parts of the organization with any number of reports!
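The back-of-envelope arithmetic is worth making explicit. The figures below are assumed for illustration, not drawn from any study:

```python
# Crude estimate of managerial time soaked up by canned reports per cycle.
num_reports = 120          # canned reports in a roll-out (assumed)
avg_recipients = 8         # average recipient users per report (assumed)
minutes_per_instance = 10  # average time spent with one report instance

instances = num_reports * avg_recipients            # 960 instances perused
total_hours = instances * minutes_per_instance / 60

print(f"{instances} report instances, ~{total_hours:.0f} hours per cycle")
# -> 960 report instances, ~160 hours per cycle
```

Even with these modest assumptions, that is four person-weeks of (often senior) attention per reporting cycle, which is why over-provisioning reports is so quietly expensive.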


The obvious remedy, common sense tells us, is to strongly question the need for any report and remove the redundancies. Before we look at the remedy more closely, let us look at what are these reports like and what are they generally used for:


a) Dump or Log or Scroll reports: these are records of every transaction processed by the application. There may be additional records showing the trail of events before and after the transactions. These reports are mainly used for statutory reasons, audit purposes, as a historical archive and, sometimes, for information recovery (when the primary purpose is information recovery, the Dump may not be human-readable and is usually processed by a system utility; it is then no longer considered a report).


b) Transaction Reports: these are reports of transactions filtered by some selection criteria, sorted, summed up and formatted. Prior period data may be included for comparison. These reports are of informative kind: which product sold how much in which region, which parts were supplied by a vendor, which orders were processed by a machine shop, etc. A drill-down facility may be available to track down the details of a transaction across functional silos. Usually these reports do not process the data beyond reporting them as such, except for some totaling or calculating percentages.  Useful for managers to monitor operations under their supervision. 


c) Summary Reports: these reports abstract out the transactions and focus more on various kinds of summaries of transactions. Of course, the drill-down may show underlying transactions. These reports are used by senior managers to monitor the performance of their areas of operation at an aggregate level. Dashboards could be placed under this type.


d) Processing Reports: these reports, as the name implies, may include significant amount of processing on underlying data. This processing is distinct from merely crunching the data for presentation by way of charts and graphs. Senior managers may use these reports to look at scenarios that are not intrinsically modeled in the enterprise applications. A typical example is to pull out raw data and apply some adjustment rules and produce final adjusted numbers. The downside to these reports is the danger of mixing up processing with presentation. In this way, processing is fragmented and is not standard across reports, leading to problems of reconciling different reports that work on the same data. For example, two reports on resource utilization may differ depending on how they process the utilization data. One may round off to nearest weeks and the other may process the data as is, in terms of days, without any round-off.
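The resource-utilization example can be shown in a few lines. The sample figures are invented; the point is that two reports over identical data disagree solely because one rounds to the nearest week and the other works in days:

```python
# Same underlying utilization data, processed two ways.
days_worked = [3, 9, 12, 4]  # days per resource over the period (sample data)

# Report A: works in days, as-is.
report_in_days = sum(days_worked)                         # 28 days

# Report B: rounds each resource to the nearest 5-day week first.
report_in_weeks = sum(round(d / 5) for d in days_worked)  # 1 + 2 + 2 + 1 = 6 weeks

print(report_in_days)       # 28
print(report_in_weeks * 5)  # 30 -- will never reconcile with Report A
```

When each report embeds its own processing like this, the reconciliation effort lands on the readers of the two reports, which is the mixing-of-processing-and-presentation danger described above.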


Often in ERP rollouts, loading good amount of processing logic into reports is a common practice, fearing the formidable alternative of customizing the ERP.


It is another matter that when the enterprise model is complex as with ERP solutions, reports (not limited to processing reports) may differ simply on where they pull their data from (ignoring for a moment differences in processing the data as mentioned above) and enormous efforts are wasted on reconciling the different reports. Going back to the example of reporting on utilization of human resources, the report pulling data from the HR function would not easily match with the report pulling data from the Projects function.


e) Exception Reports: these reports, different from alerts, draw the attention of operating personnel and managers especially to deviations from the established operating norms. It is easy to envisage exception reports in every aspect of operations. Example: A report recommending replenishment of specific stock items.  


And some of them are not directly related to the operations. For instance, exception reporting is very effective in spotlighting data validation and integrity errors for subsequent data correction. Security aspects like attempted security breaches are usually reported as exceptions.


The above taxonomy of reports is sufficient for the purpose of discussion here, even if it is not all-inclusive. The report types are not mutually exclusive. A report on the ageing of a customer’s pending bill payments could first be considered an Exception Report, insofar as it highlights an abnormal situation for follow-up. It may also qualify as a Summary Report. The function overrides the form.


Reports usually push for some organizational responses. Transaction and Summary Reports focus on performance of one or more entities and their interplay and provide the basis for some broad-based follow-up actions. Exception Reports provoke pointed actions to handle specific deviations. Dump Reports do not trigger any immediate response.     


With this background, we are ready to go back to the ‘disease’ and the common-sense remedy we talked about earlier.


At this point, it is more interesting to look at reports and views, taken as a whole or individually, from the enlarged perspective of how well aligned they are to the business, and not merely for the purpose of curbing excesses. Closely aligning the outputs to the needs of the business would be positively beneficial, given that the organization depends mainly on these reports and views for its life-signs and to manage itself from head to tail.


As mentioned at the outset, from a software engineering (or is it information engineering?) perspective, this important piece of an organization’s information systems has surprisingly not been subjected to much design rigor, formal or otherwise, to optimally arrange for business alignment.


Will set off on this un-rutted path in a soon-to-be blog.
