Seamless Object-Oriented Software Architecture
Authors: Kim Waldén and Jean-Marc Nerson
Publisher: Prentice Hall
Reviewer: Dan Wilder
If you have some experience with object-oriented programming, and you are looking for a compatible design method, read this book. The authors describe a method (they avoid the term “methodology,” and in Chapter 6 tell us why) based entirely on object-oriented concepts. In use since 1990, Business Object Notation (BON) avoids starting with the familiar data flow, entity-relationship, or state transition diagrams. System use scenarios, prominent in other methods, are found here, but not in a fundamental role. What you will find are classes, featuring inheritance and client relations; clusters, flexible groupings of classes; and objects, that is, the run-time instances of classes. Original and quite sensible graphical and textual notations are described, suitable for garage floor, white board, or CASE tool. The book spends roughly equal time on notation, process, and case studies. Several appendices present condensed information. A nice glossary and a fine bibliography provide the icing on this cake.
Seamless Object-Oriented Software Architecture is the first widely available full-length discussion of BON. For a fresh look at issues of object-oriented software development, the book is worth reading even if you're happy with another method. If not, consider this one. The book is readable; it doesn't get bogged down in minutiae, but it covers a lot of material. Be warned: these authors hit the ground running. If you are not already familiar with object-oriented concepts, start with a more introductory book.
Among BON's key ideas are two I will discuss briefly. First, reduce the conceptual gap between design and implementation. Second, provide means to selectively abstract from the welter of low-level details. The two ideas synthesize well. The resulting model is of a single piece, even while a view of it may range over many different abstraction levels. Hence the use of “seamless”. Take a detailed look at a small piece of the model, in a context of the most abstract view of the rest, and it fits into place perfectly.
The conceptual gap between design and implementation is reduced by eliminating difficult, clumsy, or irreversible transformations from the picture. Data flow diagrams, state transition models, entity-relationship diagrams, and so on, while considered useful for specialized problems, are here dismissed as foundations for a general-purpose method. Rather, the effort is to explore the application of class, object, inheritance, polymorphism, and the software contract, to the higher-level representation of systems.
Abstraction is facilitated by the easy transition between levels of detail in the BON models, and also by the rich semantic content lent to the class interface description by the software contract. This contract is a part of the class interface, spelling out the class requirements and obligations, independent of the program code, which often won't exist when the interface is first described. This use of contract provides real substance in the design, in a way that bubbles and arrows just can't do. It does so in a way that is understandable in a context of the more abstract bubbles and arrows. Zoom out for perspective. Zoom in for detail. And the detail always makes sense in the context of the larger picture. Or else it doesn't, and this tells you either the detail or the picture must be changed! Better to find this out early, before the system is nearly implemented, and changes become much more expensive. A good design method should help you find this sort of thing.
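To make the idea concrete, here is a minimal sketch of a software contract, in Python rather than the Eiffel-style notation the book uses. The class name and features are my own illustration, not the book's code; plain assertions stand in for BON's preconditions, postconditions, and class invariant.

```python
class Account:
    """A hypothetical class whose interface carries a contract."""

    def __init__(self, balance: int = 0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Class invariant: the balance is never negative.
        assert self._balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount: int) -> None:
        # Precondition: the client must request a valid amount.
        assert 0 < amount <= self._balance, "precondition violated"
        old = self._balance
        self._balance -= amount
        # Postcondition: the supplier guarantees the stated effect.
        assert self._balance == old - amount, "postcondition violated"
        self._check_invariant()

    @property
    def balance(self) -> int:
        return self._balance
```

The point is that the requirements and obligations live in the interface itself, meaningful long before any implementation exists; a caller reading `withdraw` knows exactly what it must supply and what it will get back.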
The focus is always on the design of coherent, well thought out classes which embody what you know about some concept or idea. These furnish the basis for software re-use. In the short term, within the scope of their originating project, they are fastened together perhaps more than once, as the definition of the project changes, using relatively transient “glue” classes that give a particular system its shape and particulars. Thus re-use begins at home, and the system is not hedged in by premature rigid definition of what is in many cases the most volatile aspect of a system: its external interface.
The notion of coherent re-usable classes bears some kinship to the traditional Unix “small sharp tools” philosophy, where programs that do one thing well may be combined in unanticipated ways to perform work not contemplated when the tools were written. However, the flexibility of the object-oriented framework is much greater. The key is having well-focused tools: in the Unix case, binaries like ls and find; for object-oriented programming, classes like LINEAR_ITERATOR or BINARY_TREE. Or perhaps PATIENT_ACCOUNT or STEPPER_MOTOR.
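A rough Python analogue of such a well-focused class might look like the following. The name echoes the LINEAR_ITERATOR mentioned above, but the code is my own illustration, not taken from the book: it does one thing, walks a sequence applying an action, and so can be glued into contexts its author never anticipated, much like a small sharp Unix tool.

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

class LinearIterator:
    """Applies an action to each item of any iterable, optionally filtered."""

    def __init__(self, items: Iterable[T]):
        self._items = items

    def do_all(self, action: Callable[[T], None]) -> None:
        # Apply the action to every item.
        for item in self._items:
            action(item)

    def do_if(self, test: Callable[[T], bool],
              action: Callable[[T], None]) -> None:
        # Apply the action only to items satisfying the test.
        for item in self._items:
            if test(item):
                action(item)
```

For instance, `LinearIterator(records).do_if(is_overdue, send_notice)` reuses the same traversal logic against clients and actions the class never heard of.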
The invention of such classes and their combination with pre-existing classes to form a working system is an incremental process requiring many trips back and forth from high level design through implementation. As in many other methods, you start at a high level, produce a rough cut at a design, then immediately begin implementation. Selected subsystems are targeted, usually not the easiest ones. This provides a reality check on the design. Then, back to the high level to revise by what you have learned, return to implementation, and so on. Implementation throws the cold clear light of day on design, design guides implementation.
From time to time you make a side trip into system use scenarios. These do not direct the organization of the system, but rather test the evolving design. The typical situation: here is something it would be reasonable to do; does this set of classes support the reasonable behavior? Sometimes it doesn't, so you go back and figure out what additional useful ideas might be wrapped in classes. The use scenarios are accompanied by object scenarios, showing the interplay of objects that accomplishes the use scenario. A novel graphical notation is used, which allows easy depiction of interactions among many more objects than the conventional ladder- or lattice-like interaction diagrams used elsewhere can accommodate.
The middle of the book, chapters 6 through 8, discusses the process of system development under BON. Some readers may want to begin reading here, as this part of the book talks a lot more about the “how” and “why” of the method. Nine standard tasks are completed, not necessarily in order, each by some mix of nine standard activities. The tasks, the subject of chapter 7, are:
Delineate system borderline
List candidate classes
Select classes and group into clusters
Further define classes
Sketch system behaviors
Define public features
Complete and review system
The activities, the subject of chapter 8, are:
Defining class features
Selecting and describing object scenarios
Working out contracting conditions
Indexing and documenting
Evolving the system architecture
Each task and activity is discussed at some length. These authors don't just dump a notation on you and leave you adrift; some care has gone into describing just how you might proceed. While emphasizing again and again that satisfactory performance is not subject to pat answers, but rather requires talent, experience, and insight, Waldén and Nerson nonetheless manage to provide what sounds to me like good advice about each of the tasks and activities. In a literature where solutions that are too simple abound (“Model the physical objects,” “Don't use multiple inheritance,” “Encapsulate interface, data, and process in separate classes”) the thoughtful advice in these chapters is welcome.
I'll be bringing you a further report in a few months. With the help of the Linux port of EiffelCase, the BON tool from Interactive Software Engineering of Santa Barbara, California, I will attempt a small freeware project using the advice in this book. My success or failure, and the delights or frustrations encountered, will furnish the topic of my next article.
Dan Wilder (firstname.lastname@example.org) writes programs and prose in Seattle, Washington. A buildmeister by day, Linux fanatic and newsgroup surfer by night, he also finds time to get outdoors, play with his two darling children, and pick apples.