
Archive for the ‘OO’ Category

W(h)ither OOAD?

These days many software development projects are executed using an agile model.

To be efficient, agile programming requires agile design.

And fortunately OOAD by first principles is agile. Example: I can begin modeling an organization even before I know about the various divisions of the organization. I can always extend the design as I become progressively aware of the divisions.
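A minimal sketch of this progressive extension (the class names and the headcount example are my own, hypothetical additions): the organization is modeled before its divisions are known, and divisions and teams are slotted in later without disturbing existing code.

```python
class OrgUnit:
    """Any part of an organization; concrete kinds can be added later."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, unit):
        self.children.append(unit)
        return unit

    def headcount(self):
        # A composite unit's headcount is the sum over its parts.
        return sum(child.headcount() for child in self.children)

class Team(OrgUnit):
    """A leaf unit discovered later; only it knows its member count."""
    def __init__(self, name, members):
        super().__init__(name)
        self.members = members

    def headcount(self):
        return self.members

org = OrgUnit("Acme")                 # modeled before divisions are known
sales = org.add(OrgUnit("Sales"))     # a division learned about later
sales.add(Team("Inside Sales", 12))
sales.add(Team("Field Sales", 8))
print(org.headcount())  # -> 20
```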

This ability to see the 'one in many' abstraction requires a certain line of thinking that is by no means common.

Recently I was reviewing a design: the application deals with a spatial assembly of a number of entities, and each entity occurs in many variations.

Neither the Requirements nor the Design made any serious attempt to seek out the commonalities across the variations of each entity; both were kind of flat and unfolded. The team was ready to code every variation modularly, in a very straightforward manner, as and when it was encountered.

On the downside, this approach means a design sure to offend the purists, less reuse, and more effort in coding, testing and fixing. On the plus side, no great skills of abstraction are required, and the coding is also rather simple and fast. Given the all-round shortage of skills, perhaps this approach cannot be dismissed altogether?
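To make the trade-off concrete, here is a sketch contrasting the two approaches (the duct entity and its variations are invented purely for illustration). The flat approach codes each variation on its own as it turns up; the abstracted approach captures the commonality once, so client code never changes when a variation is added.

```python
# Flat approach (hypothetical example): each variation coded separately.
def area_round_duct(d):
    return 3.14159 * (d / 2) ** 2

def area_square_duct(s):
    return s * s

# Abstracted approach: the commonality (every variant reports an area)
# is captured in a base class; a new variation is just a new subclass.
class Duct:
    def area(self):
        raise NotImplementedError

class RoundDuct(Duct):
    def __init__(self, d):
        self.d = d
    def area(self):
        return 3.14159 * (self.d / 2) ** 2

class SquareDuct(Duct):
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s * self.s

# Client code is written once, against Duct, and is untouched by new variants.
assembly = [RoundDuct(2.0), SquareDuct(3.0)]
total = sum(duct.area() for duct in assembly)
```

The flat version is faster to write for the first few variations; the abstracted version pays off as variations accumulate, which is exactly the tension described above.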

A new trick the old dog must learn? It leaves me a little unsure of what I stand for.

End


 

A question I often pose to software professionals is: how do you evaluate an OO design? We assume for now that the functional completeness of the design is not in question. The responses are interesting and varied. They usually circle around how well encapsulation, polymorphism and the like are implemented in the design, and how well reusability is exploited. Some get into OO metrics.

 

I am rarely countered with the observation that the question is a wide-open one: there are several aspects to a design (some twenty-plus non-functional attributes), so which one do I have in mind when evaluating it? After all, a design is a model for realizing both functional and non-functional user requirements.

 

If I were asked to be more specific about my chief concern regarding design, I would say it is the basic ability of the software to take in changes to its functionality over time. Changes to the functionality implemented in software are inevitable, owing to the way an organization responds to internal and environmental shifts. With some software these changes are easier to make; with some, it is gut-wrenching. And, today, a good part of any IT (non-Capex) budget is spent on getting software to change in step with business needs.

 

So the concern over the software design being able to take changes in its stride is legitimate, and important enough to say: the design that permits changes to be made more readily, with less effort, is the better design. Is this all just the usual non-functional attribute of 'maintainability'? Maybe, in part. I would rather think of it as the legitimate evolution of the software, whereas 'maintenance' connotes the status quo. And today the pace of this evolution has quickened even in 'stable' businesses.

 

Now let us proceed to figure out what the criterion for evaluating a design from this perspective could be. The question could also be turned on its head: how does one produce a design that readily accommodates changes?

 

OO is already touted as a paradigm well suited to handling changes. Why? Because its concepts, such as encapsulation, inheritance and the interface mechanism, are suited to coping with changes. Obviously, then, whichever design uses these features heavily, as shown by appropriate metrics or otherwise, is the way to go?

 

This misses a crucial point. The initial functional requirements demand a set of abstractions. The design is best done by recognizing these abstractions and aligning its own abstractions with them. This is the true purport of all those OO guides that tell us how to identify candidate classes by listing out the nouns in the problem description. If this is done as it should be, the initial alignment is ensured. It still does not guarantee that the design is capable of coping with the changes to come.
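As a toy illustration of the 'nouns become candidate classes' guideline (the one-line problem statement and all class names below are invented): the statement "A member borrows a book from the library" yields the nouns Member, Book and Library, which become the candidate classes.

```python
class Book:
    def __init__(self, title):
        self.title = title
        self.borrowed = False

class Member:
    def __init__(self, name):
        self.name = name

class Library:
    def __init__(self):
        self.loans = {}  # maps a Book to the Member who borrowed it

    def lend(self, book, member):
        # The verb in the problem statement ('borrows') suggests this operation.
        book.borrowed = True
        self.loans[book] = member

library = Library()
book = Book("Design Patterns")
library.lend(book, Member("Asha"))
```

The design abstractions here mirror the abstractions in the initial requirements; as the text says, that alignment alone says nothing yet about future changes.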

 

The same principle applies to changes. Changes, too, demand a set of abstractions in the areas of change if they are to be handled later with minimal effort. A design that also aligns its abstractions with those in the areas of change is the one that truly delivers on the promise of the OO paradigm.

 

So the key to good design seems to lie outside the design phase! It lies in the phase of assessing requirements and, importantly, how those requirements would change in the foreseeable future. While we do a good job of the former, the latter has no place in our practice as yet! I am not aware whether formal methodologies for gathering and modeling requirements call for attention to this aspect. Is there a section of the requirements document distinctly devoted to foreseeable evolutionary changes? Not in nine-plus cases out of ten. No wonder our systems are not well equipped to adapt to the flow of time.

 

The software development community could counter with: "How can we foresee changes to come? If we could, we would provide for them from the word go." This is not strictly true in all cases. It is not too difficult to figure out with the users which parts of the business processes are apt to change, if only we bring to the user's table questions specifically targeting the future. Some changes are obvious in the trade, and these are well taken care of even now.

 

Examples:

 

Tax laws: These could change from time to time.

 

Sales-person's incentives or commission: The scheme for incentivising sales-persons changes from time to time, even mid-year, depending on the business objectives. In a healthy quarter, getting new clients may be important; in a sluggish quarter, mining current accounts may be the priority. Clearly the scheme needs to be abstracted.
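A sketch of how such a scheme might be abstracted (the scheme names and commission rates below are entirely hypothetical): each incentive policy is a separate object behind a common interface, so swapping the policy mid-year touches no client code.

```python
class IncentiveScheme:
    """Abstraction over the volatile commission policy."""
    def commission(self, sale):
        raise NotImplementedError

class NewClientFocus(IncentiveScheme):
    """Healthy quarter: reward new-client sales more (hypothetical rates)."""
    def commission(self, sale):
        rate = 0.10 if sale["new_client"] else 0.04
        return sale["amount"] * rate

class MineCurrentAccounts(IncentiveScheme):
    """Sluggish quarter: reward repeat business more (hypothetical rates)."""
    def commission(self, sale):
        rate = 0.04 if sale["new_client"] else 0.08
        return sale["amount"] * rate

def payout(sales, scheme):
    # Client code depends only on the abstraction, not on any one scheme.
    return sum(scheme.commission(s) for s in sales)

sales = [{"amount": 1000.0, "new_client": True},
         {"amount": 2000.0, "new_client": False}]
q1 = payout(sales, NewClientFocus())       # 100 + 80  = 180
q2 = payout(sales, MineCurrentAccounts())  # 40 + 160 = 200
```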

 

However, plans to open a new office, to start a new distribution channel, to introduce a new pricing policy or new service offerings, to acquire a company, and the like may not be uncovered in a routine study of requirements, the focus being on the present. Only targeted probing with users may bring out these and other possible change triggers. A word of caution: the average user we engage with may not be privy to some of these plans!

 

In summary, a formal and focused business volatility analysis could be carried out with users at different levels of the organizational hierarchy, so that the abstractions required by the business now and in the future (to the foreseeable extent) are identified and the design abstractions are appropriately set up. The design abstractions could range from simple parameterization to more refined OO and variability techniques. The mode of deploying the changes also influences the choice of design technique.
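The 'simple parameterization' end of that range can be as plain as keeping a volatile figure out of the code (the tax categories and rates below are hypothetical): when the tax law changes, the change is a table edit, not a code change.

```python
# Hypothetical tax rates kept as data rather than hard-coded in logic.
TAX_RATES = {"standard": 0.18, "reduced": 0.05}

def tax(amount, category="standard", rates=TAX_RATES):
    """Compute tax from the current rate table."""
    return amount * rates[category]

# When the law changes, only the table changes; tax() is untouched.
TAX_RATES["standard"] = 0.20
```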

 

In fact, it is a good idea to include a discussion of how the design would be impacted by anticipated and unanticipated changes in the user requirements: would the design abstractions take them in their stride elegantly, or would they cause major upheavals? One recalls how, in Operations Research, the algorithms provide for sensitivity analysis to figure out the impact on the computed solution if certain conditions were to change. Incidentally, an earlier 'Change Management' post talks about the sensitivity of effort estimates to changes in user requirements.

 

Is this a non-issue with packaged solutions like ERP? No, it is still an issue, though perhaps to a lesser degree. Configuring an ERP solution for the current business practice is not a trivial effort. And when there are changes to current practice, reflecting them could turn out to be a minor or a significant effort, depending on the degrees of freedom in the initial layout. For instance, consider organizations that frequently reorganize their operations: divisions and departments merge and split, get centralized and decentralized, and so on. The ERP could be elegantly re-configured for all these changes, or it could be a snake pit, depending on how it was set up initially.

 

As an aside, abstractions in the requirements-gathering phase may also be necessitated for an entirely different reason: the users involved may not be clear or articulate about their needs at that point in time, or the scenario is in some kind of flux. These requirements may get fleshed out later. Design abstractions must be able to cope with these too.

 

All along, software architects and designers have been required to think in abstractions. Now are we asking our Business Analysts also to get into the groove? Yes, that's the drift.

 

How do we build systems for businesses that are intrinsically very volatile? We will look at that in a post to follow.


Oh, OO

Earlier it was observed that the OO paradigm deals with the problem domain, and hence an OO design quickly realigns itself if the problem statement changes. How is this done? An OO design achieves it by readily modeling the real world (as opposed to modeling a solution, as in the traditional approach) in a given context. This is largely true, but not strictly. An OO design is not an exact replica of the real world (there is a lot of published material on this subject available on the Internet). The reason is easy to see. In any application there are, of course, classes corresponding to real-world objects; but they are often outnumbered by implementation classes which may not have readily identifiable real-world counterparts. Examples are boundary classes, an encryption algorithm, etc.

This post is not about those implementation classes; it is about the classes modeling the real-world objects. Even with these, one ends up with a less-than-optimal design if the classes model them with great fidelity. The breaking away of the OO design from the true real-world objects can be pinned to the basic OO tenet that the objects in an OO design need to be empowered as much as possible, and as uniformly as the context allows (skew in empowerment will be the subject of another post). The real-world problem comprises live objects as well as inanimate objects. While the live objects 'behave', the inanimate objects have very limited or no behavior at all. In an OO design, however, even inanimate objects are invested with interesting behaviors as part of this empowerment.

For example, consider a Library application. In the real world, one picks up a book from the shelves (or looks it up in the electronic catalog). The book is taken to a counter, where a library person (or an electronic agent) issues the book after recording the transaction. In the OO design, the responsibility of issuing the book is shifted to the book object itself: 'the book issues itself'. This is how the different objects are empowered to the full. How do we explain this? If the book object were animate in the real world, perhaps it would behave this way: it would not need another real object (the library person) to get itself issued! Similarly, an invoice (object) can print itself without anyone's help or intervention!
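A sketch of such an empowered book object (class and method names are my own, hypothetical choices): the Book records its own issue, with no librarian object in sight.

```python
from datetime import date

class Book:
    def __init__(self, title):
        self.title = title
        self.issued_to = None
        self.issued_on = None

    def issue(self, member, on=None):
        """'The book issues itself' to a member and records the transaction."""
        if self.issued_to is not None:
            raise ValueError(f"'{self.title}' is already issued")
        self.issued_to = member
        self.issued_on = on or date.today()

book = Book("Refactoring")
book.issue("M-042")  # the book, not a librarian, performs the issue
```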


Oh, OO

I cannot recall where I read it, but I liked it much. It was an explanation of the merit of the OO approach, and the gist was something like this: in the traditional approach (read 'functional'), the solution to a given problem is figured out and coded. The OO approach tries to model the problem domain directly, and the solution flows out of it. Thus there is a greater alignment between the problem stated and the software designed and coded. The software is able to follow changes in the problem statement more easily and elegantly. This is an important demand on any software development paradigm, since changing requirements are the norm in today's times.

Of course, the OO approach by itself does not guarantee this alignment. How does one do a good job of it, so that the promise of the OO approach is actually delivered where the rubber meets the road? Are there cook-book prescriptions to follow? At present, the answer is at best a set of guidelines, self-checks and some measures. I will bring up some pieces of this for a closer look, going forward.
