
Posts Tagged ‘ERP’

A short post from Valeria Maltoni at conversationagent.com draws attention to a paper on health-services reform in Canada, in which Dr. Sholom Glouberman and Dr. Brenda Zimmerman address how problems should be looked at.

In their paper, the authors classify problems into three types: a) Simple, b) Complicated and c) Complex. These are explained using the table below:

[Table: Problem Types. The paper's exemplars: following a recipe (simple), sending a rocket to the moon (complicated), raising a child (complex).]

The paper shows, in a real-life application in the healthcare domain, how the vicious cycle of ever-resource-hungry ER services – a sore point with many countries in the west – may be transformed into a virtuous cycle of providing needed services. All it calls for is the right perspective: regarding it as a complex problem and adopting an approach appropriate to this class of problems when seeking solutions.

A number of examples are cited to show how a wrong perspective on the problem – one is often seduced by prior experience into regarding a truly complex problem as a complicated one amenable to our learned methods – leads to incorrect approaches and undesired outcomes.

An amazing paper, I think, that forces us to take a fresh look at how we have been handling many seemingly intractable personal, professional and societal problems with little or mixed success.

Their paper (dated 2002) has wide applicability far beyond its subject of medicare in Canada. It is accessible at: http://publications.gc.ca/collections/Collection/CP32-79-8-2002E.pdf

And Valeria Maltoni's insightful blog, covering a variety of topics backed by her enormous experience in the creative execution of integrated marketing and communication programs, is available at: http://conversationagent.com

Happy reading!

End

Read Full Post »

 

A question I often pose to software professionals: how do you evaluate an OO design? We assume for the present that the functional completeness of the design is not in question. The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, or how well reuse is exploited. And some get into OO metrics.

 

I am rarely countered with the observation that the question is a wide-open one: there are several aspects to a design (some twenty-plus non-functional attributes), so which one do I have in mind for evaluating it? After all, a design is a model for realizing both the functional and the non-functional user requirements.

 

If I were asked to be more specific about my chief concern in regard to design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. In some software these changes are easy to make; in some, gut-wrenching. And, today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with business needs.

 

So the concern over a software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this all about the usual non-functional attribute of 'maintainability'? Maybe, in part. I would rather think of it as legitimate evolution of the software, while 'maintenance' connotes status quo. And today, the pace of this evolution has quickened even in 'stable' businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. The question could also be turned on its head: how does one produce a design that readily accommodates change?

 

OO is already touted as a paradigm well suited to handling change. Why? Because concepts such as encapsulation, inheritance and the interface mechanism are suited to coping with change. So, obviously, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing the nouns in the problem description. If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design can cope with the changes to come.

 

The same principle applies to changes. Changes also demand a set of abstractions in the areas of change, if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers on the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements and, importantly, of assessing how these requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet. I am not aware of formal methodologies for gathering and modeling requirements that call for attention to this aspect. Is there a section in the requirements document distinctly devoted to foreseeable evolutionary changes? Not in nine-plus cases out of ten. Small wonder our systems are not well equipped to adapt to the flow of time.

 

The software development community could counter: "How can we foresee changes to come? If we could, we would provide for them from the word go." This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user's table questions specifically targeting the future. Some changes are obvious in the trade and are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-persons' incentives or commission: the scheme for incentivizing sales-persons changes from time to time, even mid-year, depending on the business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted, as the sketch below suggests.
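What might that abstraction look like? A minimal sketch in Java, where the Sale type, the scheme names and the percentages are all invented for illustration: the incentive scheme becomes an interface, and each quarter's policy is just an implementation behind it.

```java
// Minimal stand-in for a closed sale (invented for this sketch).
record Sale(double value, boolean newClient) {}

interface IncentiveScheme {
    double incentiveFor(Sale sale); // incentive earned on one closed sale
}

// Healthy quarter: reward new-client acquisition more heavily.
class NewClientScheme implements IncentiveScheme {
    public double incentiveFor(Sale s) {
        return s.newClient() ? s.value() * 0.05 : s.value() * 0.01;
    }
}

// Sluggish quarter: reward mining of current accounts instead.
class AccountMiningScheme implements IncentiveScheme {
    public double incentiveFor(Sale s) {
        return s.newClient() ? s.value() * 0.02 : s.value() * 0.04;
    }
}
```

Payroll and reporting code depend only on IncentiveScheme; a mid-year change of policy becomes a new implementation slotted in, not a rewrite of the callers.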

 

However, plans to open a new office, to start a new distribution channel, to introduce a new pricing policy or new service offerings, to acquire a company, etc., may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with the users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be wise to some of these plans!

 

In summary, a formal and focused business-volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in the future (to the foreseeable extent) are identified and the design abstractions are appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.
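At the simple-parameterization end of that range, the tax-law example above might look like the following sketch (the class and category names are invented for illustration): the rates live in configuration, so a statutory change is a data edit, not a code release.

```java
import java.util.Map;

class TaxCalculator {
    private final Map<String, Double> rateByCategory; // e.g. loaded from a config table

    TaxCalculator(Map<String, Double> rateByCategory) {
        this.rateByCategory = rateByCategory;
    }

    double taxOn(String category, double amount) {
        Double rate = rateByCategory.get(category);
        if (rate == null)
            throw new IllegalArgumentException("no rate configured for " + category);
        return amount * rate;
    }
}

// Usage: new TaxCalculator(Map.of("standard", 0.18)).taxOn("standard", 100.0) -> 18.0
```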

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in the user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, the algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier 'Change Management' post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when there are changes to current practice, reflecting them could turn out to be a minor or a significant effort, depending on the degrees of freedom in the initial lay-out. For instance, consider organizations that frequently reorganize their operations: divisions and departments merge and split, get centralized and decentralized. The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario is in some kind of flux, to be fleshed out later. Design abstractions must be able to cope with these too.

 

All along, software architects and designers were required to think in abstractions. Are we now asking our Business Analysts to get into the groove as well? Yes, that's the drift.

 

How do we build systems for businesses that are intrinsically very volatile? We will look at that in a post to follow.

Read Full Post »

(contd.)

Let us go back to the order-entry example and its (single) business objective of reducing the time taken to enter error-free orders. We had developed a partial list of actions for assuring performance with regard to this objective, on two separate threads:

Business Execution thread:

– owner: customer, action: train end-users on orders and their entry.

– owner: customer, action: take up some process reengineering and rationalizing.

– owner: customer, action: make available end-users for training.

– owner: customer, action: provide adequate bandwidth to connect up with the back-end order-entry system.

IT Execution thread:

– owner: software service provider, action: train end-users on the new solution.

– owner: software service provider, action: design lightly loaded screens, minimize server visits…

– owner: software service provider, action: make the screens intuitively obvious, minimize clicks for main flow, use technologies like Ajax for rich user interface, generate meaningful error messages, provide useful defaults, drop-down selects, auto-fill, etc. to reduce data entry effort.

– owner: software service provider, action: include in the requirements and design the feature to save a partially filled order, such that another authorized user can later complete it (sketched below).
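A hedged sketch of that last feature (the types, status values and authorization rule are invented for illustration): the order carries a status, and completion by a different user is an explicit, authorized step.

```java
import java.util.Set;

enum OrderStatus { DRAFT, SUBMITTED }

class Order {
    private OrderStatus status = OrderStatus.DRAFT;
    private final String createdBy; // retained for the audit trail

    Order(String createdBy) { this.createdBy = createdBy; }

    // Any authorized order-entry user, not only the creator, may complete a draft.
    void complete(String userId, Set<String> authorizedOrderEntryUsers) {
        if (status != OrderStatus.DRAFT)
            throw new IllegalStateException("order already submitted");
        if (!authorizedOrderEntryUsers.contains(userId))
            throw new SecurityException("user not authorized for order entry");
        status = OrderStatus.SUBMITTED;
    }
}
```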

The customer's organization owns the Business Execution thread, while the IT Execution thread is driven by the service provider. In this example, only two broad organization-level ownerships are shown. In general, there could be more specific roles, drawn from the stakeholders and their organizational structures.

Also, the actions read generic in the above. In reality, they could get quite context-specific in terms of intents and measures (see the examples below).

Some of the actions map directly to activities of a project plan. Example: training end-users on the new solution. And some could go under SDLC guidelines for the project, gross or fine-grained. Examples: [a design guideline: the landing page should not exceed 60 KB]; [a fine-grained, feature-level coding standard: stored procedures handling multiple on-screen selections should use table variables for simplicity and speed].

Now the guidelines emanate strongly from the business objectives, instead of degenerating into a huge 'cut-and-pasted' linear list. Even if the guidelines are already in place and were not generated in this manner, it would still be useful to validate them along these lines, to fill the gaps and weed out the unimportant. For instance, it may show that certain business-level failure syndromes do not have sufficient guidelines protecting against them – a gap to be plugged. It may also become possible to apply a sense of priority to the set of SDLC standards, which always threaten to grow prolific and often conflict with one another.

Incidentally, an earlier post, 'Coding Standards – a chimera?', spoke of the importance of relating the coding standards to architectural attributes and other keys. Here we enlarge that idea, relating all SDLC standards as strongly as possible to the business objectives, which in turn form the basis for those architectural attributes.

Preemptive planning of this kind significantly enhances assurance of achieving intended business objectives.

In summary, the submission made here is to recognize and plan a Business Execution thread in mesh with the IT Execution thread, which is usually all there is to a conventional project plan. This stems from conceiving a project as including the main purpose of achieving business objectives, instead of limiting it to software deliverables as is the current practice. With it comes the clear statement that business objectives have a greater chance of being realized only if both the customer and the service provider plan and work together from day one, with an end-to-end vision. On this road, the SDLC standards stand out with a definite, prioritized purpose.

Extending the scope of a project beyond the limits of software deliverables is an opportunity for the service provider to deliver tangible business value, fraught with interesting possibilities for ongoing engagement with the customer. If ERP roll-outs are planned and executed along these lines (though not the whole nine yards yet), why not bespoke development?

(concluded)

Read Full Post »

(contd.)

More on the actions that go into Business Execution and IT Execution threads and some new opportunities for service providers:

In a typical ERP roll-out, most actions go into the Business Execution (BE) thread. There would be actions in the IT Execution (ITE) thread to install, configure and customize the solution, but these actions are driven by actions in the BE thread.

Should that not also be the case with non-ERP solution development and roll-out? Since the solution is being developed, rather than being ready off the shelf, there happens to be an intensive set of SDLC actions. Nevertheless, the interlock with the BE thread must be maximized, so that course corrections, if any, are applied at the earliest point in time. Obviously these interlocks must be purposefully arranged for productive use of the customer's time, and not contrived.

 

There is a famous curve, often produced in project kick-off meetings, which shows the actions on the part of the customer and of the developer in a typical SDLC project, without formally identifying the two threads as we have done. Customer involvement peaks initially during the requirements-gathering phase, drops off during the design and coding phases, and perks up again during acceptance testing. This describes waterfall execution most closely, and can be adapted suitably for other paradigms of development as well.

 

The traditional SDLC model does not stretch itself (while ERP roll-outs may) to cover a successful roll-out. This is a crucial activity: the solution developed morphs into shelf-ware if the roll-out is not handled with professional competence and precision.

 

There is an even more important question: are the business objectives of the project, as originally conceived by its sponsor, achieved through the usage of this solution? This immediately implies a period of usage by the end-users after the roll-out.

 

The ERP world makes a weak attempt by way of a post-implementation audit service, while the traditional SDLC model completely dissociates itself from this question. So, between the roll-out and the audit, who is concerned with the progress made? Can it be left to the customer to drive this march without any input of professional expertise? What are the sign-posts of success, and the failure syndromes? Shouldn't there be a well-planned and sustained effort during this critical phase? Note that this is different from the L1-L2-L3-L4 support of Application Management Services.

 

The plea made here is that this is a definite opportunity for the service provider to enlarge his bouquet of services, and the value he delivers, to cover these milestones, giving the customer some reassurance of achieving the intended benefits.

 

Some mature customers even now plan end-to-end and arrange for themselves assistance of this kind, either from the service provider or from some other agency. In any case, there seems to be a clear need for formally structuring the actions and defining the services and deliverables associated with the two milestones – and planning them in from the 'go'. Once these services mature, this can also lead to risk-reward models for the service provider.

 

(More to follow)

Read Full Post »

 

(contd.)

 

Before we get to the core, I must mention an interesting solution that addresses, in a way, the concerns voiced about mixing business logic into Processing Reports. Recently I came across a case study where an MNC had the usual problem of reporting from a variety of dispersed and disparate sources, such as Excel sheets, legacy ERP systems, etc. The reports were quite complex, and there was no single container to host all the processing logic. The organization deployed an ETL tool that fitted the bill and loaded the data into a Reporting Database! This SQL database was used purely for reporting. On this database they also built their business logic as a uniform interface, and pulled their reports from there. This architecture certainly fixed the problem of scattered business logic. The data was still transactional; the database was not of the data-warehouse kind.

 

Even when there is a container application, such as an ERP instance, to host the business logic and the reports, this solution may become an alternative meriting serious consideration when the reports are quite complex. The downsides to this approach are: a) an intermediate step and possible time delays are introduced; b) the output business logic is still separated from the transaction-related business logic; c) views may be generated from the ERP instance or from the Reporting Database, and making them look alike is a challenge; and d) the reports have to be coded explicitly instead of using the ERP-native report generator. And, with all of these, the associated maintenance issues.
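The 'uniform interface' idea in that case study might be as simple as the following sketch (the interface and method names are my assumptions, not from the study): every report, whether an ERP-side view or an explicitly coded report, calls one shared query layer instead of embedding its own copy of the rules.

```java
import java.util.List;

// One home for output business logic over the Reporting Database, so that
// "adjusted revenue" or "overdue" means the same thing in every report.
interface ReportingQueries {
    // revenue after the organization's standard adjustment rules
    double adjustedRevenue(String region, int fiscalPeriod);

    // invoices flagged overdue by the shared ageing rules
    List<String> overdueInvoices(String region);
}
```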

 

How would I, as an IT professional (not as a management consultant), go about rationalizing the output system of reports (and views) for maximum business impact? The exercise may be applied to a system of reports that already exists, or to reports being planned for a new application. While some steps are obvious, some are not. The obvious steps (especially in an IT-mature organization) are included here for completeness:

 

– Compile a list of the reports that need to be subjected to this exercise of rationalization.

 

– Develop the business purpose of each report; weed out duplicate ways of stating the same purpose. Qualifiers are useful in generating variants of a business purpose: shows payment-pending invoices by Region, by Office, by Customer, by Product-line, by Period, etc.

 

– One may or may not have the option of meaningfully (re)naming the reports to point to their purpose.

 

– Do a preliminary check that the information content of the report supports the business purpose. The depth of this check depends on the IT professional's knowledge of the business and of best practices in the domain.

 

– Generate a reference matrix showing the reports and their users, with the users grouped under their functional silos: Finance/Accounts-Receivables, HR/Payroll, etc. (a minimal sketch of such a matrix in code follows this list).

 

– Classify the users of each report: a 'direct' or 'responsible' user uses the report for managing his operations; a 'supervisory' or 'accountable' user uses it to review the operations with his team; an 'informational' user is merely kept informed by it. This simple classification is adequate for most purposes.

 

– Revisit each report with its direct and supervisory users. Validate the business purpose, the information content and the format (the format aspect of a report, though quite important, is not pursued further in this blog). There are some interesting and powerful opportunities at this step to restore true value: a) check whether the user consumes the report as such, or does further processing to make it usable for its intended purpose. Very often the user is found massaging the numbers further: a missing summary, a computed ratio or KPI, a comparison with a past period, etc. Cutting out this massaging would be a significant efficiency contribution; b) more complex massaging is usually carried out in Excel – can this be done away with, or at least seamlessly integrated? c) this is an opportunity to 'hard' reconcile the supervisory perspective of a business aspect with the direct operational perspective. A no-brainer simplification is to ensure the Transaction Report goes to the operating personnel and the related Summary Report to the supervisory personnel; and d) review the list of 'informational' users of the report and the reasons for their inclusion or exclusion; mark candidates for each.

 

– These done, take the discussion to the broader plane of the user's responsibilities and how the reports support those responsibilities. This would reveal the 'missing' views and reports – potential for creating value. It is not unusual to find system outputs not covering the entire breadth of a user's responsibilities or his KPIs.

 

– Review with each informational user the list of reports he receives and his thoughts on inclusions and exclusions. Go back to the direct and supervisory users of those reports to finalize the 'informational' inclusions and exclusions. At this point a report may even undergo some changes to suit the needs of the informational users, or some missing reports may again be uncovered.

 

– Note that a report with multiple 'responsible' users, especially from different functional silos, strongly indicates multiple business purposes, stated or omitted. And a report with multiple purposes is a strong candidate for splitting.

 

– Multiple reports for the same or related purposes are good candidates for merging. When the business purpose is quite specific (not generic, like 'highlights cost overruns'), their distribution lists could still differ if they present different perspectives. Do they?

 

– Develop an exhaustive list of abnormal events that could occur in each functional silo and across silos, and relate each event to the Exception Report that shows it up. This may reveal events with potentially serious consequences passing through unflagged. It is also important to check that a) these events are pointed out at the earliest possible instant after their occurrence, b) the reporting intensity is commensurate with the degree of abnormality, and c) the recipients of the reports include all users concerned with the origin of the events and with the organization's consequent responses. Without sufficient care here, process breaks could severely impair the organization's ability to respond.

 

– A report-type view of the system of reports also throws up useful, if gross, pointers to imbalances. An absence of Log Reports may readily indicate holes in statutory compliance, weaknesses in security-audit procedures and, in some cases, even in recovery capabilities. Few Exception Reports may point, as we have already seen, to a failure to flag significant abnormal events in the operations and to respond quickly. Are the operating and supervisory personnel adequately (not overly) serviced with Transaction, Processing and Summary Reports covering their major responsibilities and accountabilities? Similarly, are the business purposes adequately and powerfully supported? Are the functional silos bridged optimally?
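As promised above, a minimal sketch of the reference matrix and the user classification (the class, the 'Silo/user' naming convention and the split rule are all invented for illustration). It also mechanizes one of the checks above: a report with 'direct' users in more than one silo is flagged as a candidate for splitting.

```java
import java.util.*;

enum Usage { DIRECT, SUPERVISORY, INFORMATIONAL }

class ReportMatrix {
    // report name -> ("Silo/user" -> how that user uses the report)
    private final Map<String, Map<String, Usage>> matrix = new HashMap<>();

    void record(String report, String siloSlashUser, Usage usage) {
        matrix.computeIfAbsent(report, r -> new HashMap<>()).put(siloSlashUser, usage);
    }

    // Reports with 'direct' users in more than one functional silo:
    // strong candidates for splitting, per the note above.
    List<String> splitCandidates() {
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Map<String, Usage>> report : matrix.entrySet()) {
            Set<String> silos = new HashSet<>();
            for (Map.Entry<String, Usage> user : report.getValue().entrySet())
                if (user.getValue() == Usage.DIRECT)
                    silos.add(user.getKey().split("/")[0]); // "Finance/ar-clerk" -> "Finance"
            if (silos.size() > 1)
                candidates.add(report.getKey());
        }
        return candidates;
    }
}
```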

 

It would be interesting to see if some principles of project portfolio management could be carried into this exercise of rationalizing system outputs. 

 

Just as we have rigor in the design of databases (ERD, normalization…), this appears to be a ripe candidate for a formal model and practice, both for design and, importantly, for ongoing review.

 

In summary, rationalizing the system outputs has a ready pay-back in terms of managerial effectiveness by: a) re-engineering the outputs for maximum business impact and operational efficiency; b) weeding out redundancies in the outputs as well as in their distribution; c) discovering opportunities for filling gaps and creating value for the business; and d) making up for debilitating process breaks.

 

Importantly, note that IT application boundaries, technology platforms and deployment architectures pose no problems in carrying out this exercise. And since change is constant in business, this cathartic effort is not likely to be a single shot.

 

A potential service offering of real value from the CIO's stable? It has a quick turn-around and, for the most part, may not need face-to-face engagement or travel.

 

(concluded)

 

Read Full Post »

 

Reports and online views are organized presentations of information for ready comprehension and decision-making. They form a major part of the usable outputs of IT systems, the basis for managing the operations of an enterprise. Yet these outputs, taken as a whole or individually, are not subjected to any kind of design rigor, except for their formats! Targeting this concern, this blog and the one to follow introduce some basic concepts and build simple practices towards optimally designing this system of outputs.

 

Today, reports are viewable online and views are printable offline. Dashboards are a special kind of view, using graphic metaphors instead of rows and columns. The discussion here refers to reports but is equally applicable to other forms of output. The principles and practices outlined apply to reports that are planned ground-up and developed for use, not to reports designed and retrieved entirely on the fly with a query-report engine or an analytics engine.

 

These 'canned' reports build up to a sizeable number in any application and have an abiding tendency to multiply, weed-like, far beyond the original plans. One has only to look at any ERP roll-out to see it for real, though this dangerous 'disease' is not limited to ERP solutions. Why is it a 'disease', and dangerous at that? Multiply the number of reports by the number of recipient users to get the total number of report instances perused. Now multiply that by 10 minutes (or some other number, less or more), the average time a user spends with a report instance. This is the (crudely estimated) amount of time, possibly of senior management, soaked up by these reports. Individually it may not be very significant, but collectively it could be quite substantial. In fact, it is simple to paralyze an organization without setting off alarms in any quarter: all that needs to be done is to 'helpfully' over-provision users in different parts of the organization with any number of reports!
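To make the arithmetic concrete with invented numbers: 40 canned reports, each perused by 15 users at 10 minutes an instance, comes to 40 × 15 × 10 = 6,000 minutes, roughly 100 person-hours of (often senior) management time per reporting cycle.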

 

The obvious remedy, common sense tells us, is to strongly question the need for every report and remove the redundancies. Before we look at the remedy more closely, let us look at what these reports are like and what they are generally used for:

 

a) Dump, Log or Scroll Reports: these are records of every transaction processed by the application, possibly with additional records showing the trail of events before and after the transactions. They are mainly used for statutory reasons, for audit purposes, as a historical archive and, sometimes, for information recovery. (When the primary purpose is information recovery, the dump may not be human-readable and is usually processed by a system utility; it is then no longer considered a report.)

 

b) Transaction Reports: these are reports of transactions filtered by some selection criteria, sorted, summed up and formatted; prior-period data may be included for comparison. They are of the informative kind: which product sold how much in which region, which parts were supplied by a vendor, which orders were processed by a machine shop, etc. A drill-down facility may be available to track the details of a transaction across functional silos. Usually these reports do not process the data beyond reporting it as such, except for some totaling or calculation of percentages. They are useful for managers monitoring the operations under their supervision.

 

c) Summary Reports: these reports abstract away the transactions and focus on various kinds of summaries, though a drill-down may show the underlying transactions. They are used by senior managers to monitor the performance of their areas of operation at an aggregate level. Dashboards could be placed under this type.

 

d) Processing Reports: these reports, as the name implies, may include a significant amount of processing on the underlying data, distinct from merely crunching the data for presentation as charts and graphs. Senior managers may use them to look at scenarios that are not intrinsically modeled in the enterprise applications; a typical example is pulling out raw data, applying some adjustment rules and producing final adjusted numbers. The downside to these reports is the danger of mixing up processing with presentation: the processing becomes fragmented and non-standard across reports, leading to problems in reconciling different reports that work on the same data. For example, two reports on resource utilization may differ depending on how they process the utilization data: one may round off to the nearest week, while the other processes the data as is, in days, without any round-off.
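A tiny runnable illustration of that round-off divergence (the numbers are invented): the same seventeen days of utilization comes out as 85% one way and 75% the other.

```java
// Report A processes days as is; Report B rounds to whole weeks first.
public class UtilizationDemo {
    public static void main(String[] args) {
        double daysUsed = 17, daysAvailable = 20;

        double byDays = daysUsed / daysAvailable;                    // 0.85
        double byWeeks = Math.round(daysUsed / 5.0)                  // 17 d -> 3 wk
                       / (double) Math.round(daysAvailable / 5.0);   // 20 d -> 4 wk

        System.out.printf("by days: %.0f%%, by weeks: %.0f%%%n",
                          byDays * 100, byWeeks * 100);              // 85% vs 75%
    }
}
```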

 

Often in ERP roll-outs, loading a good amount of processing logic into reports is a common practice, out of fear of the formidable alternative of customizing the ERP.

 

It is another matter that, when the enterprise model is complex, as with ERP solutions, reports (not limited to Processing Reports) may differ simply in where they pull their data from (ignoring for a moment the differences in processing mentioned above), and enormous effort is wasted on reconciling the differing reports. Going back to the example of reporting on the utilization of human resources: the report pulling data from the HR function would not easily match the report pulling data from the Projects function.

 

e) Exception Reports: these reports, different from alerts, draw the attention of operating personnel and managers specifically to deviations from the established operating norms. It is easy to envisage exception reports in every aspect of operations. Example: a report recommending replenishment of specific stock items.

 

And some of them are not directly related to the operations. For instance, exception reporting is very effective in spotlighting data-validation and integrity errors for subsequent data correction. Security aspects, such as attempted security breaches, are usually reported as exceptions.

 

The above taxonomy of reports is sufficient for the purpose of the discussion here, even if it is not all-inclusive, and the report types are not mutually exclusive. A report on the ageing of customers' pending bill payments could first be considered an Exception Report, in so far as it highlights an abnormal situation for follow-up; it may also qualify as a Summary Report. The function overrides the form.

 

Reports usually push for some organizational response. Transaction and Summary Reports focus on the performance of one or more entities and their interplay, and provide the basis for broad-based follow-up actions. Exception Reports provoke pointed actions to handle specific deviations. Dump Reports do not trigger any immediate response.

 

With this background, we are ready to go back to the ‘disease’ and the common-sense remedy we talked about earlier.

 

At this point, it is more interesting to look at the reports and views, taken as a whole or individually, in the enlarged perspective of how well they are aligned to the business, and not merely for the purpose of curbing the excesses. The impact of closely aligning the outputs to the needs of the business would be positively beneficial, given that the organization depends mainly on these reports and views for its life-signs and to manage itself from head to tail.

 

As mentioned at the outset, from a software-engineering (or is it information-engineering?) perspective, this important piece of an organization's information systems has surprisingly not been subjected to much design rigor, formal or otherwise, to optimally arrange for business alignment.

 

We will set off on this un-rutted path in a blog to follow soon.

Read Full Post »