Thursday, June 25, 2009

Reusability

Reusability is the ability of software elements to serve for the construction of many different applications. The need for reusability comes from the observation that software systems often follow similar patterns; it should be possible to exploit this commonality and avoid reinventing solutions to problems that have been encountered before. By capturing such a pattern, a reusable software element will be applicable to many different developments.

Reusability has an influence on all other aspects of software quality, for solving the reusability problem essentially means that less software must be written, and hence that more effort may be devoted to improving the other factors, such as correctness and robustness. Moreover, the more autonomous the modules, the higher the likelihood that a simple change will affect just one module, or a small number of modules, rather than triggering a chain reaction of changes over the whole system.

Here again is an issue that the traditional view of the software lifecycle had not properly recognized, and for the same historical reason: you must find ways to solve one problem before you worry about applying the solution to other problems. But with the growth of software, and its attempt to become a true industry, the need for reusability has become a pressing concern. A simple architecture will always be easier to adapt to changes than a complex one.
                        
Reusability will play a central role in the discussions of the following chapters, one of which is in fact devoted entirely to an in-depth examination of this quality factor, its concrete benefits, and the issues it raises. The object-oriented method is, before anything else, a system architecture method which helps designers produce systems whose structure remains both simple and decentralized.

Programming Environments

A programming product is usable in many operating environments, for many sets of data. To become a generally usable programming product, a program must be written in a generalized fashion. In particular, the range and form of inputs must be generalized as much as the basic algorithm will reasonably allow. Then the program must be thoroughly tested, so that it can be depended upon.

This means that a substantial bank of test cases, exploring the input range and probing its boundaries, must be prepared, run, and recorded. Finally, promotion of a program to a programming product requires its thorough documentation, so that anyone may use it, fix it, and extend it. As a rule of thumb, I estimate that a programming product costs at least three times as much as a debugged program with the same function.

Moving across the vertical boundary, a program becomes a component in a programming system. This is a collection of interacting programs, coordinated in function and disciplined in format, so that the assemblage constitutes an entire facility for large tasks. To become a programming system component, a program must be written so that every input and output conforms in syntax and semantics with precisely defined interfaces.

The program must also be designed so that it uses only a prescribed budget of resources: memory space, input-output devices, computer time. Finally, the program must be tested with other system components, in all expected combinations. This testing must be extensive, for the number of cases grows combinatorially. It is time-consuming, for subtle bugs arise from unexpected interactions of debugged components.

Robustness

Robustness is the ability of software systems to react appropriately to abnormal conditions. Robustness complements correctness. Correctness addresses the behavior of a system in cases covered by its specification; robustness characterizes what happens outside of that specification. As reflected by the wording of its definition, robustness is by nature a fuzzier notion than correctness.

                       Since we are concerned here with cases not covered by the specification, it is not possible to say, as with correctness, that the system should “perform its tasks” in such a case; were these tasks known, the abnormal case would become part of the specification and we would be back in the province of correctness. This definition of “abnormal case” will be useful again when we study exception handling.

                   It implies that the notions of normal and abnormal case are always relative to a certain specification; an abnormal case is simply a case that is not covered by the specification. If you widen the specification, cases that used to be abnormal become normal even if they correspond to events such as erroneous user input that you would prefer not to happen. “Normal” in this sense does not mean “desirable”, but simply “planned for in the design of the software”.

Although it may seem paradoxical at first that erroneous input should be called a normal case, any other approach would have to rely on subjective criteria, and so would be useless. There will always be cases that the specification does not explicitly address. The role of the robustness requirement is to make sure that if such cases do arise, the system does not cause catastrophic events; it should produce appropriate error messages, terminate its execution cleanly, or enter a so-called “graceful degradation” mode.
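
To make this concrete, here is a minimal Java sketch (the class and message texts are hypothetical) of treating erroneous input as a planned-for case: instead of crashing, the program reports the problem and terminates cleanly.

    // RobustInput.java: erroneous input is planned for, not catastrophic.
    public class RobustInput {
        public static void main(String[] args) {
            if (args.length == 0) {
                System.err.println("usage: RobustInput <integer>");
                return;                                   // terminate cleanly
            }
            try {
                int n = Integer.parseInt(args[0]);
                System.out.println("square = " + (n * n));
            } catch (NumberFormatException e) {
                // Outside the “perform its task” cases: produce an appropriate
                // error message rather than a stack trace.
                System.err.println("not an integer: " + args[0]);
            }
        }
    }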

The Programming Systems Product

One occasionally reads newspaper accounts of how two programmers in a remodeled garage have built an important program that surpasses the best efforts of large teams. And every programmer is prepared to believe such tales, for he knows that he could build any program much faster than the thousand statements a year reported for industrial teams.

Why, then, have not all industrial programming teams been replaced by dedicated garage duos? One must look at what is being produced: a program.

Such a program is ready to be run by the author on the system on which it was developed. That is the thing commonly produced in garages, and that is the object the individual programmer uses in estimating productivity. There are two ways a program can be converted into a more useful, but more costly, object.

These two ways are represented by the boundaries in the diagram. Moving down across the horizontal boundary, a program becomes a programming product. This is a program that can be run, tested, repaired, and extended by anybody.

Software Maintenance

Maintenance is what happens after a software product has been delivered. Discussions of software methodology tend to focus on the development phase; so do introductory programming courses. But it is widely estimated that seventy percent of the cost of software is devoted to maintenance. No study of software quality can be satisfactory if it neglects this aspect. What is not acceptable is to have knowledge of the exact length of some piece of data plastered all across the program, so that changing that length will cause program changes of a magnitude out of proportion with the conceptual size of the specification change.

A minute’s reflection shows this term to be a misnomer: a software product does not wear out from repeated usage, and thus need not be “maintained” the way a car or a TV set does. In fact, the word is used by software people to describe some noble and some not so noble activities. The noble part is modification: as the specifications of computer systems change, reflecting changes in the external world, so must the systems themselves. The less noble part is late debugging: removing errors that should never have been there in the first place.

                                      More than two-fifths of the cost is devoted to user-requested extensions and modifications. This is what was called above the noble part of maintenance, which is also the inevitable part. The unanswered question is how much of the overall effort the industry could spare if it built its software from the start with more concern for extendibility. We may legitimately expect object technology to help.

This is inevitable, since the data must eventually be accessed for internal handling. But with traditional design techniques this knowledge is spread out over too many parts of the system, causing unjustifiably large program changes if some of the physical structure changes, as it inevitably will. In other words, if postal codes go from five to nine digits, or dates require one more digit, it is reasonable to expect that a program manipulating the codes or the dates will need to be adapted.
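
A minimal Java sketch of the remedy (the class and constant are hypothetical): confine knowledge of the code's physical length to a single class, so that going from five to nine digits is a one-constant change rather than a system-wide hunt.

    // PostalCode.java: the only place that knows how long a code is.
    public final class PostalCode {
        private static final int LENGTH = 5;   // change to 9 here, and only here
        private final String digits;

        public PostalCode(String digits) {
            if (digits.length() != LENGTH
                    || !digits.chars().allMatch(Character::isDigit)) {
                throw new IllegalArgumentException("invalid postal code: " + digits);
            }
            this.digits = digits;
        }

        public String asString() { return digits; }
    }

Client modules manipulate PostalCode objects and never mention the constant 5, so the size of the program change stays proportional to the size of the specification change.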

Portability

Portability is the ease of transferring software products to various hardware and software environments. Portability addresses variations not just of the physical hardware but more generally of the hardware-software machine, the one that we really program, which includes the operating system, the window system if applicable, and other fundamental tools. In the rest of this book the word “platform” will be used to denote a type of hardware-software machine; an example of platform is “Intel X86 with Windows NT”.

Many of the existing platform incompatibilities are unjustified, and to a naïve observer the only explanation sometimes seems to be a conspiracy to victimize humanity in general and programmers in particular. Whatever its causes, however, this diversity makes portability a major concern for both developers and users of software. Efficiency is a related concern: no one likes to wait for the responses of an interactive system, or to have to purchase more memory to run a program.

Efficiency must be balanced with other goals such as extendibility and reusability; extreme optimizations may make the software so specialized as to be unfit for change and reuse. Furthermore, the ever-growing power of computer hardware does allow a more relaxed attitude about gaining the last byte or microsecond. Still, if the final system is so slow or bulky as to impede usage, those who used to declare that “speed is not that important” will not be the last to complain.

The bottom curve is all too common: in the hectic race to add more features, the development loses track of the overall quality. The final phase, intended to get things right at last, can be long and stressful. If, under users’ or competitors’ pressure, you are forced to release the product early, at the stages marked by black squares in the figure, the outcome may be damaging to your reputation.

Information Hiding

When writing a class, you will sometimes have to include a feature which the class needs for internal purposes only: a feature that is part of the implementation of the class, but not of its interface. Other features of the class, possibly available to clients, may call the feature for their own needs; but it should not be possible for a client to call it directly. In object-oriented computation, there is only one basic computational mechanism: given a certain object, which is always an instance of some class, call a feature of that class on that object.

The mechanism which makes certain features unfit for clients’ calls is called information hiding. It is essential to the smooth evolution of software systems. In practice, it is not enough for the information hiding mechanism to support exported features and secret features; class designers must also have the ability to export a feature selectively to a set of designated clients.

It should be possible for the author of a class to specify that a feature is available to all clients, to no client, or to specified clients. An immediate consequence of this rule is that communication between classes should be strictly limited. In particular, a good object-oriented language should not offer any notion of global variable; classes will exchange information exclusively through feature calls, and through the inheritance mechanism.
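
A minimal Java sketch of these distinctions (the class and its features are hypothetical): the public method is exported to all clients, the private one is secret, and package-private visibility plays, roughly, the role of exporting to designated clients only.

    // Account.java: exported versus secret features.
    public class Account {
        private long cents;                          // secret: the representation

        public void deposit(long amountCents) {      // exported to all clients
            if (amountCents < 0) {
                throw new IllegalArgumentException("negative deposit");
            }
            cents += amountCents;
            log("deposit", amountCents);
        }

        long balanceCents() {                        // package-private: a rough analog
            return cents;                            // of selective export
        }

        private void log(String op, long amount) {   // secret: internal purposes only
            System.out.println(op + " " + amount);
        }
    }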

                               The language should make it possible to equip a class and its features with assertions, relying on tools to produce documentation out of these assertions and, optionally, monitor them at run time.Assertions have three major applications: they help produce reliable software; they provide systematic documentation; and they are a central tool for testing and debugging object-oriented software.

Functionality

Functionality is the extent of possibilities provided by a system. One of the most difficult problems facing a project leader is to know how much functionality is enough. The pressure for more facilities, known in industry parlance as “featurism”, is constantly there. Its consequences are bad for internal projects, where the pressure comes from users within the same company, and worse for commercial products, as the most prominent part of a journalist’s comparative review is often the table listing side by side the features offered by competing products.

Featurism is actually the combination of two problems, one more difficult than the other. The easier problem is the loss of consistency that may result from the addition of new features, affecting the product's ease of use. Users are indeed known to complain that all the “bells and whistles” of a product’s new version make it horrendously complex. Such comments should be taken with a grain of salt, however, since the new features do not come out of nowhere: most of the time they have been requested by users: other users. What to me looks like a superfluous trinket may be an indispensable facility to you.

The solution here is to work again and again on the consistency of the overall product, trying to make everything fit into a general mold. A good software product is based on a small number of powerful ideas; even if it has many specialized features, they should all be explainable as consequences of these basic concepts. The “grand plan” must be visible, and everything should have its place in it.

This method is tougher to enforce on a day-to-day basis because of the pressures mentioned, but yields a more effective software process and often a better product in the end. Even if the final result is the same, as assumed in the figure, it should be reached sooner. Following the suggested path also means that the decision to release an early version, at one of the points marked by colored squares in the figure, becomes, if not easier, at least simpler. It will be based on your assessment of whether what you have so far covers a large enough share of the full feature set to attract prospective customers rather than drive them away.

Joys of the Craft

A programming system component costs at least three times as much as a stand-alone program of the same function. The cost may be greater if the system has many components. In the lower right-hand corner of the diagram stands the programming systems product. This differs from the simple program in all of the above ways. It costs nine times as much. But it is the truly useful object, the intended product of most system programming efforts.

First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder “for Daddy's office”. Third is the fascination of fashioning complex, puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning.

The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate. Fourth is the joy of always learning, which springs from the non-repeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.

Extendibility

Extendibility is the ease of adapting software products to changes of specification. Software is supposed to be soft, and indeed is in principle; nothing can be easier than to change a program if you have access to its source code. Just use your favorite text editor. The problem of extendibility is one of scale. For small programs change is usually not a difficult issue; but as software grows bigger, it becomes harder and harder to adapt.

A large software system often looks to its maintainers as a giant house of cards in which pulling out any one element might cause the whole edifice to collapse. We need extendibility because at the basis of all software lies some human phenomenon, and hence fickleness. The obvious case of business software, where passage of a law or a company’s acquisition may suddenly invalidate the assumptions on which a system rested, is not special; even in scientific computation, where we may expect the laws of physics to stay in place from one month to the next, our way of understanding and modeling physical systems will change.

             Traditional approaches to software engineering did not take enough account of change, relying instead on an ideal view of the software lifecycle where an initial analysis stage freezes the requirements, the rest of the process being devoted to designing and building a solution. This is understandable: the first task in the progress of the discipline was to develop sound techniques for stating and solving fixed problems, before we could worry about what to do if the problem changes while someone is busy solving it. 

But now, with the basic software engineering techniques in place, it has become essential to recognize and address this central issue. Change is pervasive in software development: change of requirements, of our understanding of the requirements, of algorithms, of data representation, of implementation techniques. Support for change is a basic goal of object technology and a running theme through this book.

Software Quality

Engineering seeks quality; software engineering is the production of quality software. This book introduces a set of techniques which hold the potential for remarkable improvements in the quality of software products. Before studying these techniques, we must clarify their goals. Software quality is best described as a combination of several factors. This chapter analyzes some of these factors, shows where improvements are most sorely needed, and points to the directions where we shall be looking for solutions in the rest of our journey.


We all want our software systems to be fast, reliable, easy to use, readable, modular, structured and so on. But these adjectives describe two different sorts of qualities. On one side, we are considering such qualities as speed or ease of use, whose presence or absence in a software product may be detected by its users. These properties may be called external quality factors.

Under “users” we should include not only the people who actually interact with the final products, like an airline agent using a flight reservation system, but also those who purchase the software or contract out its development, like an airline executive in charge of acquiring or commissioning flight reservation systems. So a property such as the ease with which the software may be adapted to changes of specifications (defined later in this discussion as extendibility) falls into the category of external factors, even though it may not be of immediate interest to such “end users” as the reservations agent.

Other qualities applicable to a software product, such as being modular, or readable, are internal factors, perceptible only to computer professionals who have access to the actual software text. In the end, only external factors matter. If I use a Web browser or live near a computer-controlled nuclear plant, little do I care whether the source program is readable or modular if graphics take ages to load, or if a wrong input blows up the plant. But the key to achieving these external factors is in the internal ones: for the users to enjoy the visible qualities, the designers and implementers must have applied internal techniques that will ensure the hidden qualities.

Programming Systems

No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. In the mind's eye one sees dinosaurs, mammoths, and saber-toothed tigers struggling against the grip of the tar. The fiercer the struggle, the more entangling the tar.

Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it. Most have emerged with running systems; few have met goals, schedules, and budgets. Large and small, massive or wiry, team after team has become entangled in the tar.

No one thing seems to cause the difficulty; any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion. Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it.

No beast is so strong or so skillful but that he ultimately sinks. Yet we must try to understand the problem if we are to solve it. Therefore let us begin by identifying the craft of system programming and the joys and woes inherent in it.

Timeliness

Timeliness is the ability of a software system to be released when or before its users want it. Timeliness is one of the great frustrations of our industry. A great software product that appears too late might miss its target altogether. This is true in other industries too, but few evolve as quickly as software. The method promotes a common design style and standardized module and system interfaces, which help produce systems that will work together.

Timeliness is still, for large projects, an uncommon phenomenon. When Microsoft announced that the latest release of its principal operating system, several years in the making, would be delivered one month early, the event was newsworthy enough to make the front-page headline of ComputerWorld. O-O techniques enable those who master them to produce software faster and at less cost; they facilitate the addition of functions, and may even of themselves suggest new functions to add.

The contribution of O-O tools to modern interactive systems, and especially their user interfaces, is well known, to the point that it sometimes obscures other aspects. Although the extra power of object-oriented techniques at first appears to carry a price, relying on professional-quality reusable components can often yield considerable performance improvements. Because of this closeness of correctness and robustness issues, it is convenient to use a more general term, reliability, to cover both factors.

With its emphasis on abstraction and information hiding, object technology encourages designers to distinguish between specification and implementation properties, facilitating porting efforts. The techniques of polymorphism and dynamic binding will even make it possible to write systems that automatically adapt to various components of the hardware-software machine, for example different window systems or different database management systems.

Efficiency

Efficiency is the ability of a software system to place as few demands as possible on hardware resources, such as processor time, space occupied in internal and external memories, and bandwidth used in communication devices. Almost synonymous with efficiency is the word “performance”. The software community shows two typical attitudes towards efficiency. Some developers have an obsession with performance issues, leading them to devote a lot of effort to presumed optimizations. But a general tendency also exists to downplay efficiency concerns, as evidenced by such industry lore as “make it right before you make it fast” and “next year’s computer model is going to be fifty percent faster anyway”.

This issue reflects what I believe to be a major characteristic of software engineering, not likely to go away soon: software construction is difficult precisely because it requires taking into account many different requirements, some of which, such as correctness, are abstract and conceptual, whereas others, such as efficiency, are concrete and bound to the properties of computer hardware. For some scientists, software development is a branch of mathematics; for some engineers, it is a branch of applied technology.

In reality, it is both. The software developer must reconcile the abstract concepts with their concrete implementations, the mathematics of correct computation with the time and space constraints deriving from physical laws and from limitations of current hardware technology. This need to please the angels as well as the beasts may be the central challenge of software engineering. For example, an in-flight computer must be prepared to detect and process a message from the throttle sensor fast enough to take corrective action.

The concern for efficiency will be there throughout. Whenever the discussion presents an object-oriented solution to some problem, it will make sure that the solution is not just elegant but also efficient; whenever it introduces some new O-O mechanism, be it garbage collection, dynamic binding, genericity or repeated inheritance, it will do so based on the knowledge that the mechanism may be implemented at a reasonable cost in time and in space. Efficiency is only one of the factors of quality; we should not let it rule our engineering lives. But it is a factor, and must be taken into consideration, whether in the construction of a software system or in the design of a programming language. If you dismiss performance, performance will dismiss you.

Ease Of Use

Ease of use is the ease with which people of various backgrounds and qualifications can learn to use software products and apply them to solve problems. It also covers the ease of installation, operation and monitoring. The definition insists on the various levels of expertise of potential users. This requirement poses one of the major challenges to software designers preoccupied with ease of use: how to provide detailed guidance and explanations to novice users, without bothering expert users who just want to get right down to business?
               
One of the keys to ease of use is structural simplicity. A well-designed system, built according to a clear, well thought out structure, will tend to be easier to learn and use than a messy one. The condition is not sufficient, of course, but it helps considerably. This is one of the areas where the object-oriented method is particularly productive; many O-O techniques, which appear at first to address design and implementation, also yield powerful new interface ideas that help the end users.

Software designers preoccupied with ease of use will also be well advised to consider with some mistrust the precept most frequently quoted in the user interface literature, from an early article by Hansen: “know the user”. The argument is that a good designer must make an effort to understand the system’s intended user community. This view ignores one of the features of successful systems: they always outgrow their initial audience. A system designed for a specific group will rely on assumptions that simply do not hold for a larger audience.

Good user interface designers follow a more prudent policy. They make as limited assumptions about their users as they can. When you design an interactive system, you may expect that users are members of the human race and that they can read, move a mouse, click a button, and type; not much more. If the software addresses a specialized application area, you may perhaps assume that your users are familiar with its basic concepts. But even that is risky.

Woes of the Craft

         First, one must perform perfectly. The computer resembles the magic of legend in this respect too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program.

Next, other people set one's objectives, provide one's resources, and furnish one's information. One rarely controls the circumstances of his work, or even its goal. In management terms, one's authority is not sufficient for his responsibility. It seems that in all fields, however, the jobs where things get done never have formal authority commensurate with responsibility. In practice, actual authority is acquired from the very momentum of accomplishment.

The dependence upon others has a particular case that is especially painful for the system programmer. He depends upon other people's programs. These are often maldesigned, poorly implemented, incompletely delivered, and poorly documented. So he must spend hours studying and fixing things that in an ideal world would be complete, available, and usable. The next woe is that designing grand concepts is fun; finding nitty little bugs is just work. With any creative activity come dreary hours of tedious, painstaking labor, and programming is no exception.

Next, one finds that debugging has a linear convergence, or worse, where one somehow expects a quadratic sort of approach to the end. So testing drags on and on, the last difficult bugs taking more time to find than the first. The last woe, and sometimes the last straw, is that the product over which one has labored so long appears to be obsolete upon completion. Already colleagues and competitors are in hot pursuit of new and better ideas. Already the displacement of one's thought-child is not only conceived, but scheduled.

Correctness

Correctness is the ability of software products to perform their exact tasks, as defined by their specification. It is the prime quality. If a system does not do what it is supposed to do, everything else about it, whether it is fast or has a nice user interface, matters little. But this is easier said than done. Even the first step to correctness is already difficult: we must be able to specify the system requirements in a precise form, by itself quite a challenging task.

Methods for ensuring correctness will usually be conditional. A serious software system, even a small one by today’s standards, touches on so many areas that it would be impossible to guarantee its correctness by dealing with all components and properties on a single level. Instead, a layered approach is necessary, each layer relying on lower ones. Many practitioners, when presented with the issue of software correctness, think about testing and debugging.

In the conditional approach to correctness, we only worry about guaranteeing that each layer is correct on the assumption that the lower levels are correct. This is the only realistic technique, as it achieves separation of concerns and lets us concentrate at each stage on a limited set of problems. You cannot usefully check that a program in a high-level language X is correct unless you are able to assume that the compiler on hand implements X correctly.

This does not necessarily mean that you trust the compiler blindly, simply that you separate the two components of the problem: compiler correctness, and the correctness of your program relative to the language’s semantics. In the method described in this book, even more layers intervene: software development will rely on libraries of reusable components, which may be used in many different applications.

Compatibility

Compatibility is the ease of combining software elements with others. Compatibility is important because we do not develop software elements in a vacuum; they need to interact with each other. But they too often have trouble interacting because they make conflicting assumptions about the rest of the world. An example is the wide variety of incompatible file formats supported by many operating systems.

A program can directly use another’s result as input only if the file formats are compatible. The key to compatibility lies in homogeneity of design, and in agreeing on standardized conventions for inter-program communication. Examples include standardized file formats, as in the Unix system, where every text file is simply a sequence of characters, and standardized data structures, as in Lisp systems, where all data, and programs as well, are represented by binary trees.

Another example is standardized user interfaces, as on various versions of Windows, OS/2 and macOS, where all tools rely on a single paradigm for communication with the user, based on standard components such as windows, icons and menus. More general solutions are obtained by defining standardized access protocols to all important entities manipulated by the software. This is the idea behind abstract data types and the object-oriented approach, as well as so-called middleware protocols such as CORBA and Microsoft’s OLE-COM (ActiveX).
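
A minimal Java sketch of such a standardized access protocol (the interface and both classes are hypothetical): every tool agrees to speak in terms of one small interface, so any producer of documents can be combined with any consumer.

    import java.util.List;

    // The agreed-upon protocol: all tools manipulate documents through it.
    interface Document {
        List<String> lines();
        void append(String line);
    }

    // A tool written against Document works with any implementation of it.
    final class WordCounter {
        static long countWords(Document doc) {
            return doc.lines().stream()
                    .flatMap(l -> java.util.Arrays.stream(l.split("\\s+")))
                    .filter(w -> !w.isEmpty())
                    .count();
        }
    }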

Of particular importance here are typing and assertions, meant to help build software that is correct from the start, rather than debugged into correctness. Debugging and testing remain indispensable, of course, as a means of double-checking the result. It is possible to go further and take a completely formal approach to software construction. This book falls short of such a goal, as suggested by the somewhat timid terms “check”, “guarantee” and “ensure” used above in preference to the word “prove”.

Exception Handling

Abnormal events may occur during the execution of a software system. In object-oriented computation, they often correspond to calls that cannot be executed properly, as a result of a hardware malfunction, of an unexpected impossibility, or of a bug in the software. To produce reliable software, it is necessary to have the ability to recover from such situations. This is the purpose of an exception mechanism.
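
As a minimal Java sketch of such a mechanism (the file name and messages are hypothetical; Files.readString and Path.of require Java 11): the routine that detects the abnormal event raises an exception, and a caller recovers instead of letting the failure propagate.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class Recovery {
        static String readConfig(Path p) throws IOException {
            return Files.readString(p);            // may raise an exception
        }

        public static void main(String[] args) {
            String config;
            try {
                config = readConfig(Path.of("app.conf"));
            } catch (IOException e) {
                // Recover from the abnormal situation rather than crash.
                System.err.println("config unreadable, using defaults: " + e.getMessage());
                config = "";
            }
            System.out.println("running with " + config.length() + " bytes of configuration");
        }
    }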

In the society of software systems, as you may have guessed, the exception mechanism is the third branch of government, the judicial system. A related requirement concerns typing: when the execution of a software system causes the call of a certain feature on a certain object, there must be some guarantee that the object’s class actually includes such a feature. To provide such a guarantee of correct execution, the language must be typed, meaning that it enforces a few compatibility rules. Feature call should be the primary computational mechanism.

In particular: every entity is explicitly declared as being of a certain type, derived from a class; every feature call on a certain entity uses a feature from the corresponding class; and assignment and argument passing are subject to conformance rules, based on inheritance, which require the source’s type to be compatible with the target’s type. The language should also provide a mechanism to recover from unexpected abnormal situations.

In a language that imposes such a policy, it is possible to write a static type checker which will accept or reject software systems, guaranteeing that the systems it accepts will not cause any “feature not available on object” error at run time. A well-defined type system should, by enforcing a number of type declaration and compatibility rules, guarantee the run-time type safety of the systems it accepts.

Thursday, June 18, 2009

Interfaces

Every module should communicate with as few others as possible. The Small Interfaces or “Weak Coupling” rule relates to the size of intermodule connections rather than to their number: if two modules communicate, they should exchange as little information as possible. The Small Interfaces requirement follows in particular from the criteria of continuity and protection. An extreme counter-example is a Fortran practice which some readers will recognize: the “garbage common block”. A common block in Fortran is a directive of the form common /name/ x1, …, xn, which makes the listed variables accessible to every program unit that names the block.

The problem, of course, is that every module may also misuse the common data, and hence that modules are tightly coupled to each other; the problems of modular continuity and protection are particularly nasty. This time-honored technique has nevertheless remained a favorite, no doubt accounting for many a late-night debugging session.

Developers using languages with nested structures can suffer from similar troubles. With block structure, as introduced by Algol and retained in a more restricted form by Pascal, it is possible to include blocks, delimited by begin … end pairs, within other blocks. In addition, every block may introduce its own variables, which are only meaningful within the syntactic scope of the block.

Behind this rule stand the criteria of decomposability and composability, continuity and understandability. With block structure, the equivalent of the Fortran garbage common block is the practice of declaring all variables at the topmost level. Block structure, although an ingenious idea, introduces many opportunities to violate the Small Interfaces rule.

Modular Protection

A method satisfies Modular Protection if it yields architectures in which the effect of an abnormal condition occurring at run time in a module will remain confined to that module, or at worst will only propagate to a few neighboring modules. The underlying issue, that of failures and errors, is central to software engineering. The errors considered here are run-time errors, resulting from hardware failures, erroneous input or exhaustion of needed resources.

The criterion does not address the avoidance or correction of errors, but the aspect that is directly relevant to modularity: their propagation. Languages such as PL/I, CLU, Ada, C++ and Java support the notion of exception. An exception is a special signal that may be “raised” by a certain instruction and “handled” in another, possibly remote part of the system. When the exception is raised, control is transferred to the handler. Such facilities make it possible to decouple the algorithms for normal cases from the processing of erroneous cases. But they must be used carefully to avoid hindering modular protection.
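
A minimal Java sketch of confinement (all names hypothetical): the parsing module handles its own abnormal condition and presents clients with a well-defined value, so the failure never propagates beyond the module boundary.

    import java.util.Optional;

    // All knowledge of what can go wrong while parsing stays in this module.
    final class SafeParser {
        static Optional<Integer> parsePort(String text) {
            try {
                int port = Integer.parseInt(text.trim());
                return (port >= 1 && port <= 65535) ? Optional.of(port)
                                                    : Optional.empty();
            } catch (NumberFormatException e) {
                return Optional.empty();       // the failure is confined here
            }
        }
    }

    // A client never sees the exception, only a well-defined result:
    //     int port = SafeParser.parsePort(input).orElse(8080);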

The designer of every module must select a subset of the module’s properties as the official information about the module, to be made available to authors of client modules. Of course, the whole text of the module itself could serve as the description: it provides a correct view of the module since it is the module. The Information Hiding rule states that this should not in general be the case: the description should only include some of the module’s properties. The rest should remain non-public, or secret. Instead of public and secret properties, one may also talk of exported and private properties. The public properties of a module are also known as the interface of the module.

The fundamental reason behind the rule of Information Hiding is the continuity criterion. Assume a module changes, but the changes apply only to its secret elements, leaving the public ones untouched; then other modules that use it, called its clients, will not be affected. The smaller the public part, the higher the chances that changes to the module will indeed be in the secret part.

Modular Continuity

A method satisfies Modular Continuity if, in the software architectures that it yields, a small change in a problem specification will trigger a change of just one module, or a small number of modules. This criterion is directly connected to the general goal of extendibility. Change is an integral part of the software construction process. The requirements will almost inevitably change as the project progresses. Continuity means that small changes should affect individual modules in the structure of the system, rather than the structure itself.
  
             The term “continuity” is drawn from an analogy with the notion of a continuous function in mathematical analysis. A mathematical function is continuous if a small change in the argument will yield a proportionally small change in the result. Here the function considered is the software construction method, which you can view as a mechanism for obtaining systems from specifications.

This mathematical term only provides an analogy, since we lack formal notions of size for software. More precisely, it would be possible to define a generally acceptable measure of what constitutes a “small” or “large” change to a program, but doing the same for the specifications is more of a challenge. If we make no pretense of full rigor, however, the concepts should be intuitively clear, and correspond to an essential requirement on any modular method.

Another rule states that a single notation should be available to obtain the features of an object, whether they are represented as data fields or computed on demand. A method in which program designs are patterned after the physical implementation of data will yield designs that are very sensitive to slight changes in the environment.
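
Java offers only an approximation of that rule (the class is hypothetical): since clients must use method-call notation anyway, a feature can switch between a stored field and an on-demand computation without any client changing.

    import java.util.ArrayList;
    import java.util.List;

    // Clients always write account.balance(); they cannot tell which variant is in use.
    class BankAccount {
        // Variant 1: balance stored as a data field.
        //     private long balance;
        //     long balance() { return balance; }

        // Variant 2: balance computed on demand from the transactions.
        private final List<Long> transactions = new ArrayList<>();

        long balance() {
            return transactions.stream().mapToLong(Long::longValue).sum();
        }

        void record(long amount) { transactions.add(amount); }
    }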

Modular Understandability

A method favors Modular Understandability if it helps produce software in which a human reader can understand each module without having to know the others, or, at worst, by having to examine only a few of the others. The importance of this criterion follows from its influence on the maintenance process. Most maintenance activities, whether of the noble or not so noble category, involve having to dig into existing software elements.

A method can hardly be called modular if a reader of the software is unable to understand its elements separately. This criterion, like the others, applies to the modules of a system description at any level: analysis, design, implementation. The modular understandability criterion will help us address two important questions: how to document reusable components, and how to index reusable components so that software developers can retrieve them conveniently through queries.

The criterion suggests that information about a component, useful for documentation or for retrieval, should whenever possible appear in the text of the component itself; tools for documentation, indexing or retrieval can then process the component to extract the needed pieces of information. Having the information included in each component is preferable to storing it elsewhere, for example in a database of information about components.

Assume some modules have been so designed that they will only function correctly if activated in a certain prescribed order; for example, B can only work properly if you execute it after A and before C, perhaps because they are meant for use in “piped” form as in the Unix notation encountered earlier. Then it is probably hard to understand B without understanding A and C too.

Modular Composability

A method satisfies Modular Composability if it favors the production of software elements which may then be freely combined with each other to produce new systems, possibly in an environment quite different from the one in which they were initially developed. Where decomposability was concerned with the derivation of subsystems from overall systems, composability addresses the reverse process: extracting existing software elements from the context for which they were originally designed, so as to use them again in different contexts.
            
A modular design method should facilitate this process by yielding software elements that will be sufficiently autonomous, that is, sufficiently independent from the immediate goal that led to their existence, to make the extraction possible. Composability is directly connected with the goal of reusability: the aim is to find ways to design software elements performing well-defined tasks and usable in widely different contexts.

This criterion reflects an old dream: transforming the software design process into a construction-box activity, so that we would build programs by combining standard prefabricated elements. Composability is independent of decomposability. In fact, these criteria are often at odds. Top-down design, for example, which we saw as a technique favoring decomposability, tends to produce modules that are not easy to combine with modules coming from other sources.

This is because the method suggests developing each module to fulfill a specific requirement, corresponding to a subproblem obtained at some point in the refinement process. Such modules tend to be closely linked to the immediate context that led to their development, and unfit for adaptation to other contexts. The method provides neither hints towards making modules more general than immediately required, nor any incentive to do so. It helps neither avoid nor even just detect commonalities or redundancies between modules obtained in different parts of the hierarchy.

Modular Decomposability

A software construction method satisfies Modular Decomposability if it helps in the task of decomposing a software problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed separately on each of them. The process will often be self-repeating, since each subproblem may still be complex enough to require further decomposition. A corollary of the decomposability requirement is division of labor: once you have decomposed a system into subsystems, you should be able to distribute work on these subsystems among different people or groups.

The most obvious example of a method meant to satisfy the decomposability criterion is top-down design. This method directs designers to start with a most abstract description of the system’s function, and then to refine this view through successive steps, decomposing each subsystem at each step into a small number of simpler subsystems, until all the remaining elements are of a sufficiently low level of abstraction to allow direct implementation.


A typical counter-example is any method encouraging you to include, in each software system that you produce, a global initialization module. Many modules in a system will need some kind of initialization: actions such as the opening of certain files or the initialization of certain variables, which the module must execute before it performs its first directly useful tasks. It may seem a good idea to concentrate all such actions, for all modules of the system, in a module that initializes everything for everybody. Such a module will exhibit good “temporal cohesion” in that all its actions are executed at the same stage of the system’s execution. But to obtain this temporal cohesion the method would endanger the autonomy of modules.

You will have to grant the initialization module authorization to access many separate data structures, belonging to the various modules of the system and requiring specific initialization actions. This means that the author of the initialization module will constantly have to peek into the internal data structures of the other modules, and interact with their authors. This is incompatible with the decomposability criterion. In the object-oriented method, every module will be responsible for the initialization of its own data structures.
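
A minimal Java illustration of that alternative (all names hypothetical): instead of a global initializer that must reach into everyone's data, each module initializes its own structures the first time they are needed.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // The module owns its initialization; no global InitEverything is needed.
    final class Logger {
        private PrintWriter out;          // lazily initialized, private to this module

        private void ensureOpen() throws IOException {
            if (out == null) {
                out = new PrintWriter(new FileWriter("app.log", true));
            }
        }

        synchronized void log(String message) throws IOException {
            ensureOpen();                 // initialization happens on first use
            out.println(message);
            out.flush();
        }
    }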

Persistence And Documentation

Many applications, perhaps most, will need to conserve objects from one session to the next. The environment should provide a mechanism to do this in a simple way. An object will often contain references to other objects; since the same may be true of these objects, every object may have a large number of dependent objects, with a possibly complex dependency graph. It would usually make no sense to store or retrieve the object without all its direct and indirect dependents. A persistence mechanism which can automatically store an object’s dependents along with the object is said to support persistence closure.

A persistent storage mechanism supporting persistence closure should be available to store an object and all its dependents into external devices, and to retrieve them in the same or another session. For some applications, mere persistence support is not sufficient; such applications will need full database support, which also addresses other persistence issues, such as schema evolution: the ability to retrieve objects safely even if the corresponding classes have changed.
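
Java's built-in serialization gives a minimal sketch of persistence closure (the class and file name are hypothetical): writing one object automatically writes every object reachable from it.

    import java.io.*;

    public class PersistDemo {
        static class Person implements Serializable {
            String name;
            Person friend;                         // a dependent object
            Person(String name) { this.name = name; }
        }

        public static void main(String[] args) throws Exception {
            Person a = new Person("Ada");
            a.friend = new Person("Grace");        // an indirect dependent

            // Storing a also stores a.friend: the persistence closure.
            try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream("people.bin"))) {
                out.writeObject(a);
            }

            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream("people.bin"))) {
                Person back = (Person) in.readObject();
                System.out.println(back.friend.name);   // prints Grace
            }
        }
    }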

Developers of classes and systems must provide management, customers and other developers with clear, high-level descriptions of the software they produce. They need tools to assist them in this effort; as much as possible of the documentation should be produced automatically from the software texts. Assertions, as already noted, help make such software-extracted documents precise and informative.

This puts on the environment the burden of providing developers with tools to examine a class text, find its dependencies on other classes, and switch rapidly from one class text to another. This task is called browsing. Typical facilities offered by good browsing tools include: finding the clients, suppliers, descendants and ancestors of a class; finding all the redefinitions of a feature; and finding the original declaration of a redefined feature.

Software Updates

Software development is an incremental process. Developers do not commonly write thousands of lines at a time; they proceed by addition and modification, starting most of the time from a system that is already of substantial size. When performing such an update, it is essential to have the guarantee that the resulting system will be consistent. For example, if you change a feature f of class C, you must be certain that every descendant of C which does not redefine f will be updated to have the new version of f, and that every call to f in a client of C, or of a descendant of C, will trigger the new version.

Conventional approaches to this problem are manual, forcing the developers to record all dependencies and track their changes, using special mechanisms known as “make files” and “include files”. This is unacceptable in modern software development, especially in the object-oriented world, where the dependencies between classes, resulting from the client and inheritance relations, are often complex but may be deduced from a systematic examination of the software text.
 
System updating after a change should be automatic, the analysis of interclass dependencies being performed by tools, not manually by developers. It is possible to meet this requirement in a compiled environment, in an interpreted environment, or in one combining both of these language implementation techniques. In practice, the mechanism for updating the system after some changes should not only be automatic; it should also be fast.

The time to process a set of changes to a system, enabling execution of the updated version, should be a function of the size of the changed components, independent of the size of the system as a whole. Here too, both interpreted and compiled environments may meet the criterion, although in the latter case the compiler must be incremental. Along with an incremental compiler, the environment may of course include a global optimizing compiler working on an entire system, as long as that compiler only needs to be used for delivering a final product; development will rely on the incremental compiler.

Memory Management And Garbage Collection

The last point on our list of method and language criteria may at first appear to belong more properly to the next category, implementation and environment. In fact it belongs to both, but the crucial requirements apply to the language; the rest is a matter of good engineering. Object-oriented systems, even more than traditional programs, tend to create many objects with sometimes complex interdependencies.

A policy leaving developers in charge of managing the associated memory, especially when it comes to reclaiming the space occupied by objects that are no longer needed, would harm both the efficiency of the development process, as it would complicate the software and occupy a considerable part of the developers’ time, and the safety of the resulting systems, as it raises the risk of improper recycling of memory areas.

In a good object-oriented environment, memory management will be automatic, under the control of the garbage collector, a component of the runtime system. The reason this is a language issue as much as an implementation requirement is that a language that has not been explicitly designed for automatic memory management will often render it impossible.
               
This is the case with languages where a pointer to an object of a certain type may disguise itself as a pointer of another type, or even as an integer, making it impossible to write a safe garbage collector. The language should make safe automatic memory management possible, and the implementation should provide an automatic memory manager taking care of garbage collection.

Dynamic Binding And Runtime Type Interrogation

The combination of the last two mechanisms mentioned, redefinition and polymorphism, immediately suggests the next one. Assume a call whose target is a polymorphic entity, for example a call to the feature turn on an entity declared of type BOAT. The various descendants of BOAT may have redefined the feature in various ways. Clearly, there must be an automatic mechanism to guarantee that the version of turn will always be the one deduced from the actual object’s type, regardless of how the entity has been declared. This property is called dynamic binding.


Calling a feature on an entity should always trigger the feature corresponding to the type of the attached run-time object, which is not necessarily the same in different executions of the call. Dynamic binding has a major influence on the structure of object-oriented applications, as it enables developers to write simple calls to denote what is actually several possible calls, depending on the corresponding run-time situations. This avoids the need for many of the repeated tests which plague software written with more conventional approaches.
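
A minimal Java rendering of the BOAT example (the classes and method bodies are hypothetical; only the names BOAT and turn come from the text):

    class Boat {
        void turn(double degrees) {
            System.out.println("boat turns " + degrees + " degrees");
        }
    }

    class Sailboat extends Boat {
        @Override
        void turn(double degrees) {            // a redefined version
            System.out.println("trim sails, then turn " + degrees + " degrees");
        }
    }

    public class Harbor {
        public static void main(String[] args) {
            Boat b = new Sailboat();           // polymorphic attachment
            b.turn(15);                        // dynamic binding picks Sailboat's version
        }
    }

No client ever writes “if the object is a sailboat, then…”; the type of the run-time object selects the version.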

 
Object-oriented software developers soon develop a healthy hatred for any style of computation based on explicit choices between various types for an object. Polymorphism and dynamic binding provide a much preferable alternative. In some cases, however, an object comes from the outside, so that the software author has no way to predict its type with certainty. This occurs in particular if the object is retrieved from external storage, received from a network transmission, or passed by some other system.

The software then needs a mechanism to access the object in a safe way, without violating the constraints of static typing. Such a mechanism should be designed with care, so as not to cancel the benefits of polymorphism and dynamic binding. The assignment attempt operation described in this book satisfies these requirements. An assignment attempt is a conditional operation: it tries to attach an object to an entity. If, in a given execution, the object’s type conforms to the type declared for the entity, the effect is that of a normal assignment; otherwise the entity gets a special “void” value. So you can handle objects whose type you do not know for sure, without violating the safety of the type system.
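
Java has no assignment attempt operator, but an instanceof test plus a cast gives a rough sketch of the same safe narrowing (Boat and Sailboat as in the earlier sketch):

    static Sailboat asSailboat(Boat b) {
        // Analog of assignment attempt: the result is the object if its type
        // conforms, and null (playing the role of “void”) otherwise.
        return (b instanceof Sailboat) ? (Sailboat) b : null;
    }

    // Usage:
    //     Sailboat s = asSailboat(someBoat);
    //     if (s != null) { ... }    // safe to use s as a Sailboat here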

Redefinition And Polymorphism

When a class is an heir of another, it may need to change the implementation or other properties of some of the inherited features. A class SESSION, describing user sessions in an operating system, may have a feature terminate to take care of cleanup operations at the end of a session. An heir might be REMOTE_SESSION, handling sessions started from a different computer on a network.

If the termination of a remote session requires supplementary actions, class REMOTE_SESSION will redefine feature terminate. Redefinition may affect the implementation of a feature, its signature, and its specification. It should be possible to redefine the specification, signature and implementation of an inherited feature.
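
In Java, this particular example might look as follows (a sketch; the cleanup bodies are hypothetical):

    class Session {
        void terminate() {
            System.out.println("flush buffers, close files");    // standard cleanup
        }
    }

    class RemoteSession extends Session {
        @Override
        void terminate() {
            super.terminate();                                   // inherited cleanup
            System.out.println("close network connection");      // supplementary action
        }
    }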


Polymorphism is the ability for an entity to become attached to objects of various possible types. In a statically typed environment, polymorphism will not be arbitrary, but controlled by inheritance; for example, we should not allow our BOAT entity to become attached to an object of type BUOY, a class which does not inherit from BOAT.

It should be possible to attach entities to run-time objects of various possible types, under the control of the inheritance-based type system. An “entity” is a name to which various values may become attached at run time; this is a generalization of the traditional notion of variable. In object-oriented computation, there is only one basic computational mechanism: given a certain object, which is always an instance of some class, call a feature of that class on that object.

Inheritance

Inheritance is one of the central concepts of the object-oriented method and has profound consequences on the software development process. It should be possible to define a class as inheriting from another. Software development involves a large number of classes; many are variants of others. To control the resulting potential complexity, we need a classification mechanism, known as inheritance.

Multiple inheritance raises a few technical problems, in particular the resolution of name clashes. Any notation offering multiple inheritance must provide an adequate solution to these problems. It should be possible for a class to inherit from as many others as necessary, with an adequate mechanism for disambiguating name clashes.

Multiple inheritance raises the possibility of repeated inheritance, the case in which a class inherits from another through two or more paths. Precise rules should govern the fate of features under repeated inheritance, allowing developers to choose separately, for each repeatedly inherited feature, between sharing and replication.

The combination of inheritance and genericity brings about an important technique, constrained genericity, through which you can specify a class with a generic parameter that represents not an arbitrary type, as with the earlier form of genericity, but a type that is a descendant of a given class. The genericity mechanism should support the constrained form of genericity.
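
Java's bounded type parameters give a minimal sketch of constrained genericity (the class is hypothetical): the parameter T is not arbitrary but must descend from Comparable<T>, which is exactly what entitles the body to rely on an ordering.

    import java.util.TreeSet;

    // T is constrained: only types comparable to themselves are accepted.
    final class Sorted<T extends Comparable<T>> {
        private final TreeSet<T> items = new TreeSet<>();   // ordering uses compareTo

        void add(T item) { items.add(item); }
        T smallest() { return items.first(); }              // fails if empty
    }

    // Sorted<String> compiles; Sorted<Thread> is rejected at compile time,
    // because Thread does not implement Comparable<Thread>.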

Static Typing

Every entity is explicitly declared as being of a certain type, derived from a class. Every feature call on a certain entity uses a feature from the corresponding class. Assignment and argument passing are subject to conformance rules, based on inheritance, which require the source’s type to be compatible with the target’s type.

In a language that imposes such a policy, it is possible to write a static type checker which will accept or reject software systems, guaranteeing that the systems it accepts will not cause any “feature not available on object” error at run time. A well-defined type system should, by enforcing a number of type declaration and compatibility rules, guarantee the run-time type safety of the systems it accepts.
                 
The language should make it possible to equip a class and its features with assertions, relying on tools to produce documentation out of these assertions and, optionally, monitor them at run time. The features of an abstract data type have formally specified properties, which should be reflected in the corresponding classes. Assertions (routine preconditions, routine postconditions and class invariants) play this role.

Assertions describe the effect of features on objects, independently of how the features have been implemented. They help produce reliable software, they provide systematic documentation, and they are a central tool for testing and debugging object-oriented software. In the society of software modules, with classes serving as the cities and instructions serving as the executive branch of government, assertions provide the legislative branch. We shall see below who takes care of the judicial system.
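
Java's assert statement gives a minimal sketch of the idea (the routine is hypothetical): the precondition and postcondition are stated with the code, document it, and are optionally monitored at run time (enable checking with java -ea).

    final class Math2 {
        /** Returns the non-negative square root of x. Requires x >= 0. */
        static double sqrt(double x) {
            assert x >= 0 : "precondition violated: x = " + x;   // routine precondition
            double r = Math.sqrt(x);
            assert Math.abs(r * r - x) <= 1e-9 * (1 + x)         // routine postcondition
                    : "postcondition violated for x = " + x;
            return r;
        }
    }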

Seamlessness

The object-oriented approach is ambitious: it encompasses the entire software lifecycle. When examining object-oriented solutions, you should check that the method and language, as well as the supporting tools, apply to analysis and design as well as implementation and maintenance. The language, in particular, should be a vehicle for thought which will help you through all stages of your work.
 
The result is a seamless development process, where the generality of the concepts and notations helps reduce the magnitude of the transitions between successive steps in the lifecycle. These requirements exclude two cases, still frequently encountered but equally unsatisfactory. The first is the use of object-oriented concepts for analysis and design only, with a method and notation that cannot be used to write executable software.

The second is the use of an object-oriented programming language which is not suitable for analysis and design. An object-oriented language and environment, together with the supporting method, should apply to the entire lifecycle, in a way that minimizes the gaps between successive activities. Object orientation is primarily an architectural technique: its major effect is on the modular structure of software systems.

The key role here is again played by classes: a class describes not just a type of objects but also a modular unit. In a pure object-oriented approach, there is no notion of main program, and subprograms do not exist as independent modular units. There is also no need for the “packages” of languages such as Ada, although we may find it convenient, for management purposes, to group classes into administrative units, called clusters.