Why Software Design?
The main technique in software design is this: You look at the entirety of your system and you decompose it into pieces that are more manageable and have clear interfaces. This approach is usually referred to as modularization.
But what’s the point of investing such effort if, in the end, only the externally visible properties of a piece of software matter?
In his classic paper “On the Criteria To Be Used in Decomposing Systems into Modules”1, David Parnas gives the following rationale:
The benefits expected of modular programming are (1) managerial – development time should be shortened because separate groups would work on each module with little need for communication; (2) product flexibility – it should be possible to make drastic changes to one module without a need to change others; (3) comprehensibility – it should be possible to study the system one module at a time. The whole system can therefore be better designed because it is better understood.
To paraphrase this in today’s terminology, the benefits of modularization are:
(1) To enable independent work on different modules through clear interfaces, which reduce communication overhead.
A more sarcastic take on cross-team communication overhead is Conway’s Law, which states that any system designed by an organization will reflect that organization’s communication structure.
(2) To simplify future changes, by aligning the code so that anticipated changes are easy.
Changes are easy when they are contained within a module or cross only a few module boundaries. The difficulty lies in identifying likely future changes and laying out the system accordingly, so that those changes fit the design and don’t require changes to the bigger architecture.
As David Parnas says himself in the conclusion to his paper:
We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.
Modules should not rely on each other beyond what their interfaces guarantee, so that if you change a module, the change is contained behind its interface and not observable to others.
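To make this concrete, here is a minimal sketch of information hiding in Python. The names (`EventLog`, `record`, `latest`) are hypothetical, chosen just for illustration: the module’s interface exposes only recording and retrieval, while the design decision of *how* events are stored stays hidden behind it and can change without affecting callers.

```python
from collections import deque

class EventLog:
    """Interface: record(event) and latest(n).
    The storage choice is a hidden implementation detail."""

    def __init__(self, capacity=1000):
        # The hidden design decision: a bounded deque. Swapping this
        # for a plain list or a database changes nothing for callers,
        # because they only rely on the interface above.
        self._events = deque(maxlen=capacity)

    def record(self, event):
        self._events.append(event)

    def latest(self, n):
        # Return the n most recent events, oldest first.
        return list(self._events)[-n:]

log = EventLog()
for i in range(5):
    log.record(f"event-{i}")
print(log.latest(2))  # ['event-3', 'event-4']
```

If the storage decision later changes (say, to persist events on disk), only the body of `EventLog` is touched; every caller keeps working unmodified, which is exactly the containment Parnas describes.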
(3) To increase confidence in functionality, by making reasoning about relevant aspects easier.
The key here is that the interface to each module should give a more abstract guarantee which hides the module’s implementation details. If the right abstraction is picked, the module’s users will be able to reason about it in simpler terms and avoid dealing with details.
In a good modularization, the interface definition is small and the contained functionality is large (a property which John Ousterhout calls “Deep Classes”)2. This minimizes the effort to use a module and maximizes its benefit.
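Ousterhout’s canonical example of a deep interface is Unix file I/O, where a handful of calls hide buffering, caching, and filesystems. The same idea at small scale, as a hypothetical sketch (the class and method names are invented for illustration): a cache whose entire interface is one `get` call, while expiry tracking and eviction live invisibly behind it.

```python
import time

class TtlCache:
    """A deep interface: a single get(key, compute) call hides
    expiry tracking, eviction, and recomputation."""

    def __init__(self, ttl_seconds=60.0, max_entries=1024):
        self._ttl = ttl_seconds
        self._max = max_entries
        self._store = {}  # key -> (value, expires_at)

    def get(self, key, compute):
        """Return the cached value for key, or call compute() to
        produce, store, and return a fresh one."""
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]
        value = compute()
        if len(self._store) >= self._max:
            # Simple eviction policy, hidden from users:
            # drop the entry that expires soonest.
            soonest = min(self._store, key=lambda k: self._store[k][1])
            del self._store[soonest]
        self._store[key] = (value, now + self._ttl)
        return value

cache = TtlCache(ttl_seconds=5.0)
print(cache.get("answer", lambda: 40 + 2))  # computed: 42
print(cache.get("answer", lambda: 0))       # cached: still 42
```

Users reason about the cache purely in terms of the abstract guarantee (“give me the value, recomputing if stale”) and never touch timestamps or eviction, which is what makes the small interface carry a lot of functionality.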
Besides reasoning about behavior, a good system decomposition can also help reasoning about other aspects. For example, it’s easier to reason about crash resilience if different parts of the system are deployed as separate services.
Finally, a suitable system decomposition also simplifies reasoning within a module, which now has a clear contract to fulfill, as defined by the module’s interface.
There is more to be said about modularization techniques and different approaches to dealing with the uncertainties of changing requirements, but that has to go into a follow-up article. Subscribe to this blog with your RSS reader to get notified of future articles.
David Parnas, On the Criteria To Be Used in Decomposing Systems into Modules (1972) ↩︎
John Ousterhout, A Philosophy of Software Design (2018) ↩︎