Archive for the ‘Project Genre’ Category


Mike Shipulski has summed up very nicely, in a short blog post, what we knew all along about the constraints in executing projects. It is reproduced below (no share button available), with comments added in italics:

There are four ways to run projects.

One – 80% Right, 100% Done, 100% On Time, 100% On Budget

  • Fix time
  • Fix resources
  • Flex scope and certainty

Set a tight timeline and use the people and budget you have.  You’ll be done on time, but you must accept a reduced scope (fewer bells and whistles) and less certainty of how the product/service will perform and how well it will be received by customers. This is a good way to go when you’re starting a new adventure or investigating new space.

Get it out there as early as possible and follow up with iterations/releases/sprints… Also for new products. Suitable where requirements are volatile or not clearly understood.

Two – 100% Right, 100% Done, 0% On Time, 0% On Budget

  • Fix resources
  • Fix scope and certainty
  • Flex time

Use the team and budget you have, and tightly define the scope (features) and the level of certainty required by your customers. Because you can’t predict when the project will be done, you’ll be late and over budget, but your offering will be right and customers will like it. Use this method when your brand is known for predictability and stability. But be wary of the business implications of being late to market.

Also for applications where the tolerance for failure is very low (for example, public-facing systems) or where failure would be downright disastrous.

Three – 100% Right, 100% Done, 100% On Time, 0% On Budget

  • Fix scope and certainty
  • Fix time
  • Flex resources

Tightly define the scope and level of certainty. Your customers will get what they expect and they’ll get it on time.  However, this method will be costly. If you hire contract resources, they will be expensive.  And if you use internal resources, you’ll have to stop one project to start this one. The benefits from the stopped project won’t be realized and will increase the effective cost to the company.  And even though time is fixed, this approach will likely be late.  It will take longer than planned to move resources from one project to another and will take longer than planned to hire contract resources and get them up and running.  Use this method if you’ve already established good working relationships with contract resources.  Avoid this method if you have difficulty stopping existing projects to start new ones.

‘Must be done at any cost’

Four – Not Right, Not Done, Not On Time, Not On Budget

  • Fix time
  • Fix resources
  • Fix scope and certainty

Though almost every project plan is based on this approach, it never works.  Sure, it would be great if it worked, but it doesn’t, it hasn’t and it won’t. There’s not enough time to do the right work, not enough money to get the work done on time, and no one is willing to flex on scope and certainty.  Everyone knows it won’t work and we do it anyway.  The result – a stressful project that doesn’t deliver and that no one feels good about.

Well, don’t we all know…

The article may be read here.






Source: Image from explore.easyprojects.net


A short post from Valeria Maltoni at conversationagent.com draws attention to a paper on the reforms needed in Canada’s health services, in which Dr. Sholom Glouberman and Dr. Brenda Zimmerman address how problems should be looked at.

In their paper, the authors classify problems into three types: a) Simple, b) Complicated and c) Complex. These are explained using this table:

Problem Types

The paper shows, in a real-life application in the healthcare domain, how the vicious cycle of ever-resource-hungry ER services – a sore point with many countries in the west – may be transformed into a virtuous cycle of providing needed services. All it calls for is the right perspective: regarding it as a complex problem and adopting the approach appropriate to this class of problems in seeking solutions.

A number of examples are cited to show how a wrong perspective of the problem – often one is seduced by prior experience to regard a truly complex problem as a complicated one amenable to our learned methods – leads to incorrect approaches resulting in undesired outcomes.

An amazing paper, I think, that forces us to re-examine how we have been handling many seemingly intractable personal/professional/societal problems with little or mixed success.

Their paper has wide applicability far beyond its subject of medicare in Canada (dated 2013). It is accessible at: http://publications.gc.ca/collections/Collection/CP32-79-8-2002E.pdf

And Valeria Maltoni’s insightful blog on a variety of topics, backed by her enormous experience in the creative execution of integrated marketing and communication programs, is available at: http://conversationagent.com

Happy reading!






Let us continue with the remaining steps in putting together the methodology for the core process of migration.


6. The Estimation Model is simply derived from the Transformation Model by putting down the time required for carrying out each step in the Transformation Model. Steps of the (b) kind (only those not handled by grep-like utilities) and of the (c) kind, being manual, would be time-consuming. Any inaccuracy in estimating a step of the (a) kind, being tool-driven, will not significantly impact the overall estimates.
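A minimal sketch of such an Estimation Model in Python (the step names and unit times below are invented purely for illustration; the real figures come from the pilot and the Transformation Model):

```python
# Illustrative Estimation Model: per-step unit times (hours per instance),
# keyed by Transformation Model step. Kind (a) = tool-driven,
# (b) = supplementary manual change, (c) = full rewrite.
# All names and numbers here are hypothetical.
ESTIMATION_MODEL = {
    "tool_convert_asp_page":    {"kind": "a", "hours_per_instance": 0.1},
    "manual_fix_change_unit_2": {"kind": "b", "hours_per_instance": 0.5},
    "rewrite_activex_control":  {"kind": "c", "hours_per_instance": 8.0},
}

def estimate_effort(instance_counts):
    """Total effort in hours, given a count of instances for each step.

    Note how an error in the cheap (a)-kind unit time matters far less
    than an error in the manual (b)/(c)-kind unit times.
    """
    return sum(ESTIMATION_MODEL[step]["hours_per_instance"] * n
               for step, n in instance_counts.items())
```

For example, 100 tool-converted pages, 40 manual fixes and 2 rewrites would total 0.1×100 + 0.5×40 + 8.0×2 = 46 hours.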


7. Finally, put together the Process Model for the project.  The Process Model is like a cookbook recipe for carrying out the migration. It lays down the entire conversion as a sequence of steps cataloged in the Transformation Model, covering all instances of all code-classes in the source context.


A sample section of the Process Model for migrating a module ABC shows the following sequence of steps cataloged in the Transformation Model, limited to migrating a set of ASP code-class instances of the module ABC:


d.1: Use an identified tool X to convert 10 ASP pages of the module ABC. This is a step of (a) kind.


d.2: Use grep on these converted ASP pages to handle instances of change-unit-classes: 1, 3 and 6, a step of (b) kind not handled by the tool X.


d.3: Manually scan the converted pages to identify instances of change-unit-classes: 2, 4 and 5, again not handled by tool X or grep-like utilities in steps d.1 and d.2. Note manual scans are not explicitly included in the Transformation and Estimation Models. Also note that sometimes, a grep-like tool may also be used to flag these instances even if the conversion is done manually, thereby averting this manual scan.


d.4: Manually convert in all the 10 ASP pages, the flagged instances of change-unit-classes: 2, 4 and 5 as per the transformations cataloged in the Transformation Model.


In general, one or more partial or full manual scans of source or target code-class instances must be provided for in the Process Model for: a) detecting instances of change-unit-classes that are not handled by the tools or by grep-like utilities, b) covering some special situations, c) looking out for new change-unit-classes not listed in the Transformation Model, and d) checking the target after conversion. Some overall scan rates are assumed and factored into the effort computation.


Note that the effort required for this piece of migration in the Process Model could be computed, given the Estimation Model, if the number of instances of those change-unit-classes (2, 4 and 5), across the whole set of 10 ASP pages, which need to be handled manually in step d.4 is known (leaving out d.3 for simplicity). This is the key to estimating effort in migration projects. In simple terms: find all instances requiring manual conversion, and the time taken to convert an instance.  This exercise may also be called Impact Analysis. At project execution time, Impact Analysis is obviously done exhaustively, covering all instances of code-classes, to achieve error-free migration.  At the time of submitting a proposal or initially planning the project's timeline, however, Impact Analysis is often done on a representative set of code-class instances and the results are projected across the complete set of code-class instances comprising the source application. Herein lies the serious risk of going wrong with the effort estimates.
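The sample-based projection described above can be sketched as follows (the numbers in the usage example are illustrative):

```python
def project_manual_effort(sample_instances, sample_pages,
                          total_pages, hours_per_instance):
    """Impact Analysis by projection: scale the count of manual-change
    instances observed in a representative sample of pages across the
    whole application, then price it with the Estimation Model's unit time.

    The risk noted in the text: if the sample is unrepresentative, the
    density (instances per page) - and hence the whole estimate - is wrong.
    """
    density = sample_instances / sample_pages      # instances per page
    projected_instances = density * total_pages    # across the application
    return projected_instances * hours_per_instance
```

For instance, 30 instances found in a 10-page sample, projected over 200 pages at 0.5 hours per instance, gives 3 × 200 × 0.5 = 300 hours.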


There is an important question that we have not addressed yet. Given the source context comprising numerous instances of many code-classes how do we sequence our steps in the Process Model? Note that the other two models, Transformation and Estimation Models, do not address this question.


Let us think of the migration project as made up of migration units representing chunks of migration which may be progressively carried out and even verified. These chunks could be organized by function – for instance, a set of Oracle Forms in Order Entry, or a set of web pages of the Product Catalog in an e-commerce site – or by code-class: for instance, all ASP pages, all the code in the data access layer, etc. It is usually a mix of both approaches. With the former, a migration unit has instances of multiple code-classes to be migrated. For instance, migrating a set of JSP pages would mean migrating the JSP pages themselves, the Java beans used by these pages, possibly the CSS, etc.


The function-wise approach makes it possible to validate the migration methodology, models and tools early in the project. Once they are validated, migration could be undertaken code-class-wise until sufficiency is reached to take up a function-wise migration, switching back and forth – or any other mix and sequence of the two approaches.
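This interleaving of approaches could be expressed as a simple sequencer; a sketch follows, with hypothetical unit names:

```python
def sequence_units(function_units, code_class_units, pilot):
    """Return an execution order for migration units: the pilot
    function-wise unit first (to validate methodology, models and tools),
    then the code-class-wise units, then the remaining function-wise
    units. Any other mix is equally valid; this is one possible policy.
    """
    remaining = [u for u in function_units if u != pilot]
    return [pilot] + list(code_class_units) + remaining

# Hypothetical units for an e-commerce migration:
order = sequence_units(function_units=["order_entry", "product_catalog"],
                       code_class_units=["all_asp_pages", "data_access_layer"],
                       pilot="order_entry")
```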


We have, in the above, laid out the key steps of a migration methodology that is technology-neutral; its principles are applicable to a wide range of migration projects: upgrades of infrastructure systems like mail, platform migrations of application systems and even ERP version upgrades.


This piece focuses only on the core methodology of project execution. It does not cover initial phases like Inventorying, Pilot for validation and estimation or the closing phases like Testing, Data Migration, Cut-over or the pre-project phases of Initial assessment, Preparing the project proposal, etc., all of which are also rigorously methodology-driven.







In an earlier post on Project Genre, we talked about the characteristics of migration projects in general. We had observed that these projects are heavily process-driven: a good part of the conversion would have to be carried out even without understanding the functionality of the code pieces under conversion.


The migration process executes a rigorously pre-conceived migration methodology. Let us look at some major steps in laying out the methodology of conversion whether the conversion itself is carried out manually or with the help of a tool:


Also let us reasonably assume the conversion is one-to-one with no enhancements or optimization for the target context attempted.


1. The source context (the application to be migrated) has a certain architecture. In this context, various code pieces and meta-data (asp/jsp pages, Java beans, Active-X, client-side scripts, Dll’s, html pages, css, cgi modules, database stored procedures, triggers, schema, xml files, forms layout files, etc.) are architected to work together. Let us flag each of these as a distinct code-class (like file extensions). And, in the application, there would be numerous instances of these code-classes.
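As an illustrative sketch, an initial inventory of code-class instances can be taken by bucketing source files by extension (the approximation of code-class by file extension is the one suggested above):

```python
from collections import Counter
from pathlib import Path

def inventory_code_classes(root):
    """Count instances of each code-class, approximated by file extension,
    under the source tree. Files with no extension are bucketed together."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            counts[path.suffix.lower() or "(no extension)"] += 1
    return counts
```

Running this over the source tree gives the instance counts per code-class that the later models (Estimation, Process) consume.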


2. Similarly, the target context also has code-classes architected together. Now establish the correspondence/mapping between source and target code-classes. Migration effort is less if the two code-classes are similar. This is where migration tools shine, with the advantages of speed and quality. Tools are usually good at transforming instances of one code-class in the source context into a similar code-class instance in the target context. They may not achieve 100% conversion all by themselves; certain constructs may have to be migrated manually. Prima facie, a tool that minimizes manual effort is the better one, other things being equal. A likely area where manual intervention may be required is where some features of the code-class in the source context are de-supported/deprecated in the corresponding code-class in the target context.
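Such a mapping could be recorded as a simple table. The pairs, tool name and manual shares below are hypothetical, not actual ASP-to-ASP.NET figures:

```python
# Hypothetical source-to-target code-class mapping for an ASP-to-ASP.NET
# style migration. "manual_share" is the assumed fraction of instances
# the tool cannot convert; a target of None means no similar code-class
# exists in the target context, so the instances must be rewritten.
CODE_CLASS_MAP = {
    ".asp": {"target": ".aspx", "tool": "tool X", "manual_share": 0.2},
    ".inc": {"target": ".ascx", "tool": "tool X", "manual_share": 0.3},
    ".ocx": {"target": None,    "tool": None,     "manual_share": 1.0},
}

def migration_mode(source_class):
    """Classify a source code-class: tool-driven, tool-assisted or rewrite."""
    entry = CODE_CLASS_MAP[source_class]
    if entry["target"] is None:
        return "rewrite"
    return "tool-driven" if entry["manual_share"] < 0.5 else "tool-assisted"
```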


So, line up a set of tools, after due evaluation, that together migrate as many code-classes as possible with as little manual intervention as possible.


3. Strategize how a code-class will be migrated (possibly rewritten) when there is no similar corresponding code-class supported in the target context. For instance, Active-X controls may be supported in the source context and not in the target context. The conversion could be totally manual or partially assisted by tools.


4. At this point, it is fairly clear what part of migration is driven by tools and what part needs to be handled manually.


5. Now set up the Transformation Model. The Model lists every step (well, almost) of the transformation of the application. These steps fall into three buckets: (a) tool-driven transformations, (b) supplementary manual changes to make up for a tool’s deficiencies or to cover some special situations, and (c) total re-writes of some code-class instances.


For easy comprehension, (a) and (b) could be organized tool-wise and (c) by code-class.


(b) actually consists of a number of different cases, or change-unit-classes, each change-unit-class representing a single case of a source construct to be modified into an equivalent target construct. These are code snippets in the source not handled automatically by the tools. Many of the simpler change-unit-classes may be implemented by grep-like utilities; only the remaining ones need to be converted manually. Even when a manual rewrite is necessary, it can be done without an overall understanding of the processing logic.
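Many such grep-like change-unit-classes can be scripted as pattern substitutions. A minimal sketch follows; the two rules shown are invented for illustration, not real migration rules:

```python
import re

# Grep-like handling of simple change-unit-classes: each entry pairs a
# source construct (regex) with its target replacement. The rules below
# are hypothetical; real ones come from the Transformation Model.
SIMPLE_RULES = [
    (re.compile(r"\bOldApi\.Connect\b"), "NewApi.Open"),
    (re.compile(r'language="vbscript"', re.IGNORECASE), 'language="vb"'),
]

def apply_simple_rules(text):
    """Apply all grep-like change-unit-class rules to a code-class
    instance; return the new text and the substitution count (the count
    feeds the Estimation Model)."""
    total = 0
    for pattern, replacement in SIMPLE_RULES:
        text, n = pattern.subn(replacement, text)
        total += n
    return text, total
```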


(c) is like any new development. Importantly, it would require an understanding of the functionality realized by that code-class instance.


From here, we’ll continue in the next post with the Estimation Model and the Process Model including sequencing of the steps.


In many cases we don’t seem to realize the benefit of recognizing a project’s genre as early as possible. And what happens? Methodologies and processes – perfect or imperfect – are forged by yet another set of guys in the organization and deployed in execution. This is both an avoidable waste and a serious risk.

Consider migration projects: a version upgrade of MS Exchange, legacy Cobol to Open Source, VB or VC++ to (Dot)Net, ASP to ASP(Dot)Net, etc. Some are readily recognizable as such at first glance and some reveal themselves as migration projects only when looked at under the hood. Some are migration projects in toto and some, only in parts. With so many flavors and variants, clubbing them all under the genre of migration projects seems to be an exercise in intellectual abstraction without any practical pay-off.

Not so fast. If these non-rewrite migrations are examined more closely, they usually do exhibit certain common characteristics:

– Usually no time is made available to fully understand the source system to be migrated. A large amount of code needs to be migrated in a short span of time. The corollary is: such a project is largely methodology/process driven. Recall the spate of Y2K projects. 

– The processes are usually designed around usage of some tools. But tools, in most cases, do not carry out a 100% migration. And some manual efforts, often representing a good part of the overall estimates, are required to finish the job completely.

– Estimates of time and effort are made based on a sample and are accurate to the extent the sample is representative of the whole.

– It is possible to establish some kind of a trade-off between the effort estimate and the quality of migration. Defects in migration are a given as with any software related activity. 

– A migration project is amenable to an assembly-line like staffing. One team may be carrying out impact analysis and another applying the transformation of source to target. 

– Since the source code is not comprehended to any great extent, heavy involvement of users is required for testing.

– A pilot may be required to validate the methodology, tools and estimates and to reassure the user that all is well. 

– Enhancements and bug fixes are deferred until migration is completed and validated.  

– Migrating legacy data is a non-trivial task and needs to be thought through.

A methodology woven around these characteristics takes much of the thinking out of planning and executing such projects; without this canned wisdom, they can be quite daunting and risky. The software/tool vendors usually do a good job of putting together a step-by-step approach.  Rolling out a portal product or an e-commerce site are examples.

It is a good practice to maintain a catalog of processes for various stereotyped projects. For a given project, the set of processes appropriate for the genre is pulled out from the catalog and customized to fit the boundaries of the project.

When the sheer variety defies useful abstraction, sub-typing the genre into narrower classes could be attempted.  

