Distributed objects for business
Getting started with the next generation of computing
This white paper explores object-oriented technology from the perspective of business and information technology professionals responsible for planning, designing, constructing and maintaining corporate information systems. This paper presents the essential concepts of a very complex subject and relates these concepts to the practicing professional who requires clear and practical information, rather than the theoretical underpinnings of the technology.
The paper answers fundamental questions such as: What is object orientation? Is object-oriented technology only for programmers? Why is it so important to business in the 1990s? What systems development methods are required to design and deploy object-oriented information systems? How do companies make the transition to object orientation?
21st century business technology
The seed for object-oriented technology was planted in the 1960s in Norway. Kristen Nygaard and Ole-Johan Dahl of the Norwegian Computing Center developed a programming language, Simula-67, to support the modeling of discrete event simulations of scientific and industrial processes with direct representation of real-world objects. Why, a quarter of a century later, has the business world begun to express great interest in object-oriented technology?
Today's business is under assault from multiple, simultaneous revolutions in both the business and technological arenas. Author Rob Mattison1 explains that most large corporations are coping with several such revolutions concurrently.
Businesses that will thrive in the next millennium will have overcome the chaos introduced by these concurrent revolutions. Significant business reengineering efforts are under way in corporations that have recognized the need for major change. Some companies have undertaken radical change efforts while others have sought a gentler approach through continuous process improvement.
As with most major endeavors, the task of business redesign is not as easy as it may first seem. Both business processes and computer information systems are extremely complex. The increasing complexity of applications is compounded by the fact that corporations are automating mission- and life-critical applications. Taming this complexity and bringing order to chaos requires new approaches to problem-solving and new ways of thinking about business processes and information systems.
The critical challenge facing today's IS organizations is to provide leadership in business process improvement. The information systems that are needed for tomorrow's business must be far more robust, intelligent and user-centered than the data processing systems of today. Unfortunately, the increased sophistication of tomorrow's information systems involves the introduction of complexity that overwhelms the current approaches to systems development.
Lessons learned from initial efforts with business process redesign are many. A 70 percent failure rate of initial reengineering efforts indicates the degree of difficulty. Lessons of note for this discussion include:
Redesigned business processes are not "final products." Business processes must be designed for constant change. Constant change in both business and technology requires that change itself be treated as a first-class business concept and process. As business processes change, the underlying information systems must also change, and in competitive industries the lead time for such change approaches zero. Rapid change requires that both business processes and their underlying information systems be modeled and evolved together.
Object-oriented technology is based on simulation and modeling. Although this may be interesting in and of itself, the use of models represents a breakthrough in the way business information systems are developed. Instead of deploying the traditional application development life cycle, models of a business or business area are constructed. These models are shared by individual computer applications. Essentially a "computer application" becomes a unique use of the model, not a separate development activity resulting in stand-alone software constructed for "this application only."
The quality of the model is a key determinant of "reuse" and adaptability. The model itself must be designed for change. Business processes change, and proposed changes are explored by using the business model to simulate the new processes. Modelers can play "what if," run simulations of various process alternatives, and learn from the simulations.
The focus of IS shifts from applications development to the enhancement and maintenance of common business models. Business models and software models become one and the same. Applications become derivatives, alternate views and refinements of the business models.
The modeling approach to business innovation is not possible without a software approach suited to the task. For these business reasons, object-oriented technology has become of vital interest to both commerce and industry. Business and technology must be fused if corporations are to maintain the competitive advantage. Object-oriented technology can be the foundation for that fusion. With object-oriented technology, change and the management of complexity are first-class concepts. Object technology holds great promise as a means of designing and constructing the adaptive information systems needed for 21st century business.
An object-oriented technology primer
Object orientation, as a way of thinking and problem-solving, applies to a broad spectrum of technology disciplines and tools, including programming languages, analysis and design methods, databases and operating systems.
However, object orientation is not limited to technology, it also provides a way of thinking about business and business processes. Leading thinkers such as Ivar Jacobson2, David Taylor3, Robert Shelton4, and James Martin5 have developed the foundation for applying object orientation to business modeling and problem-solving.
In the interest of providing a practical guide to object-oriented technology, we will begin our discovery of the essential concepts by limiting our initial definitions to the key ideas behind the technology. We will focus on object concepts "in-the-small" and later scale up our discussions to broaden our understanding of object orientation at the enterprise level.
What are the fundamental concepts of object orientation? Object orientation is a way of modeling real systems in the real world. Within object orientation, there are several fundamental concepts that contribute to the modeling process. Since people regard the world around them in terms of objects, business and software models based on real world objects will reflect reality more naturally. Thus business object models can be easier to understand and communicate than traditional computer-centered models. Business people think in terms of people, places, things and events. Examples of business objects include: people and the roles they play (stock clerk, head cashier), places (store, warehouse, shelf), things (cash drawer, check out lane, delivery van), and events (sale, delivery, payment).
What is an object? An object is a self-contained software package consisting of its own private information (data), its own private procedures (private methods) that manipulate the object's private data, and a public interface (public methods) for communicating with other objects. An object contains both data and logic in a single software entity or package. Objects provide properties representing a coherent concept and a set of operations to manage these properties. The fusion of process logic with data is the distinguishing characteristic of objects.
Each object is capable of acting in much the same way as the real object behaves in the real world. Objects are assigned roles and responsibilities, and they contain all of the information they need to carry out their actions. The only way to use an object is to send it a message that requests a service be performed. The receiving object acts on the message and sends the results back as a message to the requesting object.
The object-oriented developer surrounds him or herself with objects relevant to the tasks to be automated. If an office-related application is being developed, objects in the mind's eye of the developer may include pencils, file folders, word processors, spelling checkers, in-baskets and documents. The developer's task is to create new objects that use messaging to communicate dynamically with the other objects in the application setting.
When a service is requested of an object, the object is sent a message, much like a traditional function call. However, the difference is that the rest of the system does not see how the object is implemented and cannot suffer any integration problems if the object's internal implementation (code) is changed. This means programming without assumptions, where functionality is well-defined and programmers do not make assumptions about how the shared routines or systems work on an internal level.
Additionally, object orientation inherently supports and extends the concepts of modularity. "Chunking"6 is a mechanism used to build larger components from smaller components so that any view of a problem under study is typically limited to seven (nine at the very most) components at one time, roughly the maximum number of concepts that the human mind can maintain at once. This approach to modularity results in object-based systems being very granular, formed from simple components that can be reused in other systems. Such reuse is the key to many of the benefits of object technology: productivity, quality and consistency. Another important benefit is that modifications tend to be local to a single object. Thus, maintenance is simplified and less costly, and changes are automatically propagated throughout all the systems of the enterprise.
From a technical perspective, an object may be more precisely defined. The first principles include: encapsulation, inheritance and polymorphism. These concepts were originally applied "in-the-small" as a means of programming, but today carry over into other object technologies, such as analysis and design methods. Our next step is to explore these concepts to deepen our understanding of objects.
Encapsulation
All, or at least most, of the descriptions of operations (behavior) and data (attributes) within an object-oriented model or system reside within objects. The only way to use or manipulate an object is to send it a message. The hiding of internal information within objects is called encapsulation. To use an object, the developer needs only to be aware of what services or operations it offers (to which messages the object responds).
A system composed of objects is constructed in a modular fashion, in which each module (object) is engaged only through its defined interface (its messages). Objects do not become dependent on one another's internal code or structure. The advantage of encapsulation is that the implementation of an object can change (being improved or extended) without having to change the way the object is used by the rest of the system. The result is that changes tend to be local to an object and maintenance is simplified.
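To make encapsulation concrete, here is a minimal sketch in C++ (one of the object-oriented languages discussed later in this paper); the Account class and its rules are purely illustrative:

    // A minimal sketch of encapsulation: the balance is private data, and
    // other objects can interact with the account only through its public
    // interface (the messages it responds to).
    class Account {
    public:
        explicit Account(double openingBalance) : balance(openingBalance) {}

        void deposit(double amount) { balance += amount; }
        bool withdraw(double amount) {
            if (amount > balance) return false;  // a rule hidden inside the object
            balance -= amount;
            return true;
        }
        double currentBalance() const { return balance; }

    private:
        double balance;  // invisible to the rest of the system
    };

Because no other code can touch the balance directly, its internal representation could later change without disturbing any code that uses the class.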
Furthermore, as object-oriented information systems are implemented, additional reusable components (objects) become available so that programming becomes more a matter of "assembly" rather than coding. As an object-oriented infrastructure matures, programming is reduced to "exception" programming based upon modifications of existing objects. Such assembly and exception concepts are demonstrated by the new graphical programming environments where objects are simply "wired" together to create applications.
Classes and inheritance
In any given information system, many of the objects will have similar characteristics such as information structure and behavior (procedures or methods). Further, large scale commercial systems may be composed of hundreds of thousands, perhaps millions, of objects. Organizing principles must be adopted to avoid a chaotic environment.
The concept of "classes" brings order to the world of objects. Classes are templates used to define the data and methods of similar types of objects. An object created from a class is referred to as an instance to distinguish the object from the mold from which it was created (the class). Object-oriented programming languages and analysis and design methods both use classes to provide a means to share common characteristics among objects.
Some objects of the same general type may have specialized characteristics. A mechanism is provided to address specialization. This mechanism is called inheritance. As the name implies, inheritance is a feature that allows one class of objects to acquire, or inherit, some or all of its information structure and behavior from another class, rather than force the developer to define or code the structure or behavior over again. Inheritance is a very useful mechanism for reuse.
Generalization and specialization are not unique to object-oriented technology. Many disciplines such as engineering and architecture use these concepts and techniques to manage complexity. And other information modeling methods provide support for sub-typing. Object-oriented methods and implementation languages support inheritance directly. They provide mechanisms to inherit both properties and operations.
Inheritance, as a core feature of object-oriented systems, will provide the developer with some important advantages. First, inheritance introduces support for code reuse at a language level. If a developer needs to change the way several classes of objects perform a certain task, the modified behavior is applied to each of the classes via inheritance. The developer needs to perform the change in only one place. Inheritance makes it very easy to modify information systems. Second, inheritance reduces redundancy. Duplicating code is discouraged by making it easy to reuse existing code.
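A minimal C++ sketch illustrates the mechanism; the classes and figures here are invented for illustration:

    #include <string>
    #include <utility>

    // Super class: structure and behavior shared by every kind of employee.
    class Employee {
    public:
        Employee(std::string name, double annualSalary)
            : name(std::move(name)), annualSalary(annualSalary) {}

        std::string displayName() const { return name; }          // inherited as-is
        double monthlyPay() const { return annualSalary / 12.0; } // inherited as-is

    protected:
        std::string name;
        double annualSalary;
    };

    // Sub-class: acquires the structure and behavior above and adds only
    // the specialization that distinguishes a sales representative.
    class SalesRep : public Employee {
    public:
        SalesRep(std::string name, double annualSalary, double commissionRate)
            : Employee(std::move(name), annualSalary),
              commissionRate(commissionRate) {}

        double commissionOn(double saleAmount) const {
            return saleAmount * commissionRate;
        }

    private:
        double commissionRate;
    };

A change to Employee's pay calculation is made in one place and automatically reaches SalesRep and every other sub-class.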
Inheritance is a powerful concept for managing complexity. Most object-oriented languages do not provide support for re-classifying an object once it is instantiated. Over time, however, the meaning of an object to a business may change, requiring a re-classification of the object.
Emerging developments in object-oriented technology include new styles of inheritance called "prototype/instance" and "multiple dynamic classification." While few languages currently support these constructs, they will become very important in future information systems. Multiple dynamic classification enables such re-classification at run time without predetermined programming. In business terms, this would allow the creation of a class like Person, whose instances can take on roles like Employee, Customer and Stockholder simultaneously.
Polymorphism
Polymorphism is a Greek term meaning "many forms." When applied to objects, polymorphism allows the developer to use one name for similar kinds of operations or functions. Rather than create unique names such as drawCircle, drawRectangle or drawSquare, a single method named "draw" may be used. Depending on the class of the object receiving the message, the polymorphism mechanism will invoke the appropriate method, such as drawing a square.
Classes represent real things in the business problem domain. Properties and operations are defined for these classes. The names of properties and operations are scoped to the class to which they belong. This allows different classes to declare the same name for a message that invokes a method or operation. It is up to the developer to ensure that the operations of multiple classes with the same method names are semantically equivalent.
The method to "pay" an employee may be defined in a general employee class, but when the message "pay" is sent to a sub-class of employee, say manager, the subclass will override the general method and apply operations unique to managers. Polymorphism can eliminate the need for complex IF, ELSE and CASE structures and can enhance the use of inheritance concepts. Objects relate to other objects in the abstract and need not be concerned with the details of how other objects select and accomplish their operations.
For example, within a banking application with class Account and sub-classes Checking Account and Savings Account, a Manager object may send a calculate interest message to each and every account without regard to or knowledge of whether each account is a Checking Account or a Savings Account object. Likewise, each account type may perform a different calculation to arrive at the interest, and the system using the object does not have to consider every possible case. Over time, the individual calculations can change, or new account types can be added, all without modifying the using program.
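The banking example can be sketched in C++; the interest rates are invented for illustration. The totalInterest function sends the same message to every account and lets each object respond according to its actual class, which is determined at run-time:

    #include <memory>
    #include <vector>

    // The paper's Account example: the caller sends "calculate interest"
    // to every account without knowing its concrete class.
    class Account {
    public:
        virtual ~Account() = default;
        virtual double calculateInterest() const = 0;  // resolved at run-time
    };

    class CheckingAccount : public Account {
    public:
        explicit CheckingAccount(double balance) : balance(balance) {}
        double calculateInterest() const override { return balance * 0.01; }
    private:
        double balance;
    };

    class SavingsAccount : public Account {
    public:
        explicit SavingsAccount(double balance) : balance(balance) {}
        double calculateInterest() const override { return balance * 0.04; }
    private:
        double balance;
    };

    // Late (dynamic) binding: the correct method is selected per object.
    double totalInterest(const std::vector<std::unique_ptr<Account>>& accounts) {
        double total = 0.0;
        for (const auto& account : accounts)
            total += account->calculateInterest();
        return total;
    }

Adding a new account type means adding one class; totalInterest and all other using code remain untouched.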
Polymorphism also accommodates the temporal aspects of information. Allowing objects to be aware of their versions (changes in attributes and methods over time), and incorporating those versions in a persistence mechanism (the object-oriented equivalent of files and databases), results in systems that gracefully handle business change over time while maintaining historical integrity. In other words, the same message sent to exactly the same type or sub-type of object will apply the correct version of the method, something difficult or impossible to do with traditional techniques.
Again, what's an object?
In anthropomorphic terms, we can think of objects as individuals. Each individual has special knowledge (attributes) and special skills (methods), some of which it gains from its ancestors (inheritance). By communicating with each other (messaging), individuals are able to work together to accomplish very complex tasks, much like a society. Different individuals may react to the same request by performing different steps as long as the results are consistent with the intent of the request (polymorphism). It is neither necessary nor desirable for the individuals to disclose to the requesters exactly how they accomplish their tasks (encapsulation), and it is not acceptable for requesters to tell an individual how to accomplish something, only what to accomplish (micro-management is not allowed). Finally, individuals are free to determine the appropriate response when a request is received and are not tied to a prior determination of the best response (dynamic or run-time binding). Late binding means that programs are written to abstract classes; the actual types, and hence the behavior, are determined at run-time.
Objects encapsulate their knowledge and behavior, inherit common functionality, communicate entirely via messages, and respond appropriately based on their individual capabilities when messaged by other objects, often determining the exact response at run-time. Various object-oriented languages support these features to differing degrees, but the essentials of the technology are the same, regardless.
The object advantage
Object technology brings key advantages to modern computing, among them models that mirror the business, reuse of proven components, adaptability to change, and simplified maintenance.
While design is not easy, object orientation makes more sense than trying to design the application in terms of inputs, outputs and data flows, and then expecting the results to resemble the business. Modeling with objects that reflect real business entities and concepts means that changes in the business model can be directly implemented in the underlying software. In addition, the real world nature of objects enables business people to converse with corporate developers in the same language of recognizable business objects.
Object-oriented technologies
As discussed earlier, object orientation has been applied to a number of technology areas. These areas include object-oriented programming languages, analysis and design methods, databases, and operating systems. In the discussion that follows, we build on the fundamentals presented in the last section and progress from definitions to discovery of how object orientation applies to each technology area.
Object-oriented programming and class libraries
In simple terms, object-oriented programming is the application of object-oriented principles and constructs to computer programming. However, these principles have been applied to differing degrees within programming languages that are labeled as "object-oriented." To be a true object-oriented programming language, a programming language must support the three pillars of object orientation: encapsulation, inheritance and polymorphism.
A programming language that supports encapsulation allows the programmer to define classes that contain both methods (behavior) and attributes (data). Some vendors suggest that their tools are object-oriented by stating that they include a set of what the vendor calls "objects." But the questions that must be asked are "Can I define my own classes?" and "Do these classes encompass behavior as well as data?" The answer should be "yes" to both. Many of the popular development tools claiming to be object-oriented support only encapsulation. True object orientation requires support for encapsulation plus inheritance and polymorphism.
Support of inheritance varies among the object-oriented programming languages. In defining a class, a programming language should allow its user to designate that a class inherits behavior (methods) and data structure (attributes) from another class. A change to the super class should be automatically propagated to the inheriting sub-classes.
Some languages support the ability to inherit directly from two or more classes. This is called multiple inheritance. Multiple inheritance is a powerful construct, but is not required to consider a language as object-oriented. Systems designed using multiple inheritance can be redesigned using single inheritance.
Support for polymorphism also varies among languages. Yet, an object-oriented language should allow a segment of program code to send a message to an object knowing that the receiving object will respond correctly even if the precise class of the object is not known.
Programming languages that support only the concept of encapsulation are referred to as object-based. According to this classification, Ada and Visual Basic are considered to be object-based because they support the encapsulation concept. Smalltalk, C++, Objective-C, Simula and Eiffel are examples of object-oriented languages.
Within an object-oriented programming language, object templates are created by the programmer to define the characteristics of the system's objects. Such a template is called a class. A class defines an information (data) structure and the behaviors for an object. From these templates, objects are created by the system at run-time. Hence, within an object-oriented system, each object belongs to a class. An object that belongs to a certain class is said to be an instance of that class.
In written materials on object-oriented programming, the terms object and instance are frequently interchanged.
Class libraries are collections of classes that provide certain functionality and can be reused in application development. Class libraries are roughly the same as libraries in the traditional programming sense, but are in the form of classes and can be sub-classed as well as used directly. For example, an accounting class library could be purchased from a supplier of accounting software and modified to meet the user's specific needs without modifying the vendor-supplied code. Modification is accomplished by inheriting functionality from the vendor-supplied classes, then programming the exceptions, as sketched below. Several class libraries are available commercially that implement graphical user interfaces (GUIs), persistence, relational database encapsulation, inference processing, real time controls and communications.
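As a hypothetical illustration (the vendor class below is invented, not taken from any real product), modification by inheritance might look like this; the vendor's code is never edited, and only the local "exception" is programmed:

    // A hypothetical vendor-supplied class from a purchased accounting library.
    class LedgerAccount {
    public:
        virtual ~LedgerAccount() = default;
        virtual void post(double amount) { balance += amount; }
        double currentBalance() const { return balance; }
    protected:
        double balance = 0.0;
    };

    // The purchaser's specialization: the vendor's code is untouched; only
    // the "exception" (a local audit rule) is programmed in the sub-class.
    class AuditedLedgerAccount : public LedgerAccount {
    public:
        void post(double amount) override {
            if (amount > 10000.0) flagForReview(amount);  // local business rule
            LedgerAccount::post(amount);                  // reuse vendor behavior
        }
    private:
        void flagForReview(double /*amount*/) { /* notify the auditors */ }
    };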
The commercial viability of domain class libraries (for example, accounting) will increase over time. The Object Management Group (OMG) has established a special interest group dedicated to defining standards for business objects and processes. These standards should aid in building momentum in the domain class library market.
Object-Oriented Analysis & Design (OOA/D)
The "structured" methods of the past decades contributed significantly to the development of reliable data processing applications. Yet, when applied to the application portfolios needed in the new competitive organization, the structured methods are strained, so object-oriented extensions are being added. For example, traditional structured methods apply techniques to normalize data, but not logic. However, these design techniques can be modified to normalize logic into object classes. Modified structured methods make it possible to develop object-oriented designs without throwing away existing skills. Unfortunately, modified structured methods tend to promote habits and ingrained ways of thinking that are not object-oriented. New, clean-slate OOA/D methods have been introduced that foster object-oriented thinking throughout the entire analysis and design process.
The contrast between traditional and next generation methods can be summarized briefly.
Three major differences between Structured Analysis & Design (SA/D) and OOA/D are noteworthy. The first and most obvious difference is that OOA/D incorporates the use of classes with encapsulation of data and behavior into objects. SA/D encourages the separation of data and behavior early in the process. This change in thought processes can be difficult for developers well-versed and experienced in SA/D.
The second difference is more subtle. OOA/D encourages modeling the real world environment in terms of its entities and their interactions. The model of the system should look like the real-world even if there is no computer to perform the tasks. The focus is on stable, real-world concepts.
SA/D, on the other hand, encourages abstracting the real world in terms of functions, which are normally very fuzzy and subject to change in the early stages of development. Structured design separately identifies the data, its flow, its control and the processes that act upon data.
The third difference is in notations used during the stages of analysis and design. SA/D changes notations as each stage progresses. This makes tracing events and the overall repeatability of the process questionable. OOA/D uses uniform notations throughout the process, focusing on elaboration, rather than transformation.
One industry authority glibly described the difference between SA/D and OOA/D as the difference between processes pillaging and plundering across innocent and helpless data structures, and gentlemanly objects engaged in cooperative conversation to accomplish a goal. This description highlights the most valuable aspects of object-oriented models: stability and flexibility.
There are also similarities between the two approaches. OOA/D incorporates some new ideas into existing SA/D techniques: message-based communications between objects, encapsulation, classification and inheritance. SA/D and OOA/D both contain the same basic parts: process, terminology, diagrams, notations, constructs, verification techniques and metrics.
Many of the current OOA/D methods do not yet contain all of these parts, but OOA/D methods are still in their infancy. SA/D methods have converged somewhat over the years to use, for the most part, common terminology, notations and diagrams; OOA/D methods have not yet evolved to this point. The basic process is fairly common: identify the subjects and objects, and define the behavior and relationships. The basic object-oriented terms also remain fairly constant. However, notations are almost as numerous as the methods themselves. Most important, common techniques for verification and metrics are lacking in many current OOA/D methods. Great care must be taken in evaluating and selecting object-oriented methods.
Object database management systems (ODBMS)
The seminal work of E. F. Codd, "A Relational Model of Data for Large Shared Data Banks,"7 was published in 1970. During the 1980s, corporations expended immense resources to exploit the capabilities of relational databases.
Relational database management systems (RDBMS) were the first systems to make it possible to decouple application issues from the design of a shared database. Before the era of the relational model, the embedded pointers and links in hierarchical and CODASYL databases forced the developer to focus on computer storage models and data structures with cumbersome pointers and links. With the relational model, the focus could shift to logical modeling of the application domain, away from the Direct Access Storage Device (DASD) physical data structure and other implementation issues, with data truly independent from process.
The relational data model represents the third stage in the evolution of database systems, following the hierarchical and network (CODASYL) models.
Data within a relational database is broken down and stored in two-dimensional tables in which each item exists as a row within a table. Complex (hierarchical) data items are arranged by joining rows from different tables and building artificial constructs such as foreign keys to facilitate reconstruction of the "real world" information. Languages such as Structured Query Language (SQL) provide a means for expressing how such data may be joined.
Objects in the real world often involve hierarchical arrangements of data. Object-oriented programming languages accommodate, and in fact simplify, the management and manipulation of such hierarchical data objects. Storing and retrieving those complex structures using a two-dimensional relational database forces the programmer to implement the composition and decomposition required for storing and retrieving data.
Object databases (ODB) have emerged as a means of providing a storage mechanism for hierarchical arrangements of data. In contrast, relational databases require programmers to translate data between their in-memory representation and their storage representation. An analogy would be the commuter who must take apart his car to park in his garage, then reassemble it when it is needed again. Object databases, on the other hand, store and manage data in the same way a program does -- as objects.
The advantages of an object database are threefold. First, there is no semantic difference between data stored within a database and data in memory. Composition and decomposition programming is not required. ODBMSs are unique in their absence of a data manipulation language (DML). Second, because of the way data is stored (maintaining its hierarchical relations), retrieving complex arrangements of data is often much faster than in a relational system. A side benefit is that such an arrangement makes it possible to build information systems that span a network as though it were one large machine. Third, the encapsulated models produced during analysis and design are the same as the database models. In a traditional development lifecycle, the analysis model is different from the design model, which in turn is different from the programming model, which is also different from the database model.
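A short sketch makes the contrast concrete. The Order object below is hierarchical; the comments contrast relational storage with a hypothetical object database interface (real ODBMS products differ in detail):

    #include <string>
    #include <vector>

    // A hierarchical business object: an Order containing LineItems.
    struct LineItem {
        std::string product;
        int         quantity;
    };

    struct Order {
        int                   orderNumber;
        std::string           customer;
        std::vector<LineItem> items;   // nested, hierarchical data
    };

    // With a relational store, the programmer decomposes this hierarchy into
    // rows (an ORDERS table plus a LINE_ITEMS table linked by a foreign key)
    // and recomposes it with joins on retrieval. With an object database the
    // object is stored and retrieved whole; a hypothetical interface:
    //
    //     db.store(order);                        // persist the hierarchy as-is
    //     Order o = db.load<Order>(orderNumber);  // no recomposition code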
The strategic reason to begin implementing an ODBMS is the ability to create true information management systems. Today, some object database management systems have (and eventually all will have) the capability to query processes (methods) of objects as well as the data (attributes). This means a user will be able to query for "past due customers" without knowing the business rules defining "past due" or having to structure those rules in the query language. By introducing access to object methods, even greater detail hiding is achieved. In such an environment, business process and rule changes do not disrupt existing applications or queries. Furthermore, the capability ensures the consistency and integrity of information presented by all applications, queries, ad-hoc programs, analysis and reports across the enterprise.
Although object databases represent the next generation, relational databases offer continuing advantages to corporations with heavy investments in relational technology. The two models will coexist for years to come. Relational data structures are used within object databases, and bridges to existing relational databases (wrappers) play an important role in preserving existing assets while incrementally migrating to objects.
Enterprise migration to objects
Object-oriented technology may be thought of as "in-the-small," where the perspective of the user is on the desktop (a single personal computer) or within a department. Concepts, methods and tools are relatively simple when object technology is adopted in-the-small. The real challenge begins when the technology is applied to the overall enterprise, when it is applied "in-the-large."
As Rob Mattison points out, "New technologies and new approaches cannot simply be infused into an organization like a health serum. We need to restructure the very organization itself if we expect to take advantage of them. This change must occur both within the information systems support staff (changes in roles and responsibilities) and within the corporation itself, as business people determine new ways to take full advantage of the additional power and flexibility that is suddenly made available."1
Corporations must develop well thought-out strategies for transitioning to object-oriented information systems. These strategies must consider the current business culture, technology culture, and current knowledge and skill sets of both business and IS personnel. The transition to objects is not merely a transition in tools and techniques. It is an evolution into a new way of doing business and a new approach to business problem-solving.
The key to successful transitions is to take small, incremental steps. Each step requires three fundamental activities: education, training and experience. Once a step has been completed, the scope can be expanded, and the next iteration can proceed with additional education, training and experience. The transition requires heuristics as there are no cookbook or silver bullet solutions. Learn a little, apply a little; learn a little, apply a little. Each iteration increases the overall understanding of the object paradigm and the deepening understanding fosters better decision-making for the next iteration.
Since corporations are under assault from simultaneous forces, the adoption of object-oriented technology cannot be revolutionary. Transitions must be incremental and iterative. They must be evolutionary and steeped in today's existing business realities.
Transitioning to the object paradigm is not a computer system "conversion"; it is a process of assimilation. Furthermore, it is a process of risk and asset management. The massive investments corporations have in existing information processing assets provide the lifeblood of current business operations. We are not "converting to a new system," we are designing new types of robust information systems that leverage existing corporate information assets.
Since the corporate goal is the insertion of object-oriented technologies into an already complex environment, successful transition strategies can deploy object-oriented notions to manage the complexities. We may speak of an "object-oriented adoption method" to convey notions of managing complexity throughout the assimilation process. Start small, build proof-of-concept prototypes, increment the scope of the problem domain, and iterate.
Although these beginnings are small, they can scale up in an orderly fashion. The subject of developing a complete assimilation method is treated in detail in our white paper, Object-Oriented Knowledge Transfer. Beginning with proof-of-concept projects, a corporation can kick the tires, initiate low-risk demonstration projects, establish a corporate Object Technology Center (OTC) to provide a focal point, and develop learning processes to expand the initial knowledge base. These are a few of the components of an overall enterprise assimilation strategy. Following are some previews of the issues and guidelines covered in the white paper on transitions:
Recognize that object-orientation is not only a technology, but also a framework for thinking about and solving problems. Mastering object technology in business requires a solid understanding of the concepts of object orientation, as well as the fundamentals of business process improvement. This will result in a synchronicity between the systems developed and the business that they model. As David Taylor points out in his book, Business Engineering with Object Technology, "Instead of managers posing problems for technologists to solve by creating new applications, the two groups must work together to create working software models of the organization."3
By definition, "new ways of thinking and problem-solving" involve steep learning curves. Initial corporate pilot or proof-of-concept projects require hiring technology experts and teaming them with corporate domain experts. Given the assistance of an experienced object technology team, a few well-suited practice projects and the patience to follow a step-by-step process, the paradigm shift can be made and the technology mastered. Mentoring and team learning, centered on live pilot projects, are the keys to the paradigm shift.
Above all, reasonable expectations must be set. The first few projects that are tackled should not attempt to demonstrate all the benefits of objects. The goals of the initial projects should be focused on exploration and discovery. A beginner is not expected to read a book on the techniques of chess and then successfully play in a tournament. No amount of book learning or classes will replace the need to get several projects under the belt and gain a level of experience comparable to the experience we have with more traditional technologies. Also, many of the benefits of object technology come from a well-developed approach for reusability; in the beginning, there is nothing to reuse.
Pilot projects should be small and low risk. Although proof-of-concept projects must be based on live business applications to be meaningful, they must not be disruptive or mission-critical. A non-critical project that does not require a large team is best. The experts hired to assist in transitions may surprise you by letting you make mistakes. Making mistakes, adjusting, making more mistakes, and making more adjustments is how we learn to use new skills with proficiency. We do not want the expert to do our thinking for us, and true technology transfer experts view mistakes as learning-in-progress.
Find a mentor. There is no substitute for experience. The best results have come from small teams working with an experienced object-technology practitioner in a craftsman-apprentice relationship. This approach may be one of the best places to apply a consultant to an object project. Additionally, a good consultant, who truly knows object technology, can help develop the architecture and build the infrastructure that is necessary for success with object orientation on an enterprise level. The biggest mistakes early adopters have reported are failing to build an infrastructure with strong configuration management capabilities and failing to establish a guiding architecture. Both are necessary to ensure reuse and the associated benefits of productivity, quality and consistency.
Choose the pilot project team members carefully. The team should consist of individuals who are proactive and open to new ideas. These people will become mentors to the rest of the company on future object projects and should be chosen accordingly.
Pick one technology at a time; do not try them all simultaneously. A visual programming tool provides the most concrete and understandable implementation of the object-oriented concepts. By its nature, a visual programming tool prevents us from falling into established habits of procedural styles of programming. The technologies work best when you apply object orientation throughout all stages and areas of the system life-cycle; as a word of caution, however, learn them one at a time. Be aware of the difference between object-oriented and object-based tools. Object-based languages/4GLs are easy to learn and are fast prototyping tools, but to get the long-term benefits of objects, you will need a true object-oriented tool supporting all three pillars of object orientation. Also remember to pay as much attention to your infrastructure as you do to the first application you build.
Consider adopting an object-oriented analysis and design method whether or not you build the systems using object-oriented programming. This approach can be a gentler introduction to object-oriented concepts, while suppressing the details needed for consideration in implementations. Companies have reported success with this approach (highly granular, client/server-based systems are not terribly different from object-based systems). Remember, the biggest payback in reuse comes from the reuse of design. Accordingly, adopt a design technique that encourages reuse.
The bottom line
As corporations explore the shifts to business process improvement and emerging technologies, the overall goal is to realign technology with business. Much attention has been focused on client/server computing. However, as discussed in The Technical Resource Connection's white paper, Client/Server Architectures, the initial approaches to client/server computing have built-in limits due to coarse granularity in partitioning the business problem domain. Object-oriented technology can provide the organizing principles for fine-grained client/server architectures. Object-oriented client/server computing is referred to as Distributed Object Computing, and the fundamentals of this emerging technology are explained in our white paper on the subject.
The bottom line for the business world is that now is the time for corporations to begin to adopt object-oriented frameworks if they are to come to grips with client/server architectures, and if they are to construct the robust information systems needed to compete in the next millennium.
Business applications of the future will need to be spread across multiple, and sometimes specialized, hardware platforms that cooperate with one another, as well as with the legacy computer systems we have today. To meet the demands of business, the information systems we build today and tomorrow must be based upon distributed object computing technologies and paradigms.
The way we build systems must change because the businesses they support must change. Not only must businesses integrate islands of information internally, they must understand their position in the value chains in their industry, and establish interenterprise or virtual corporations. Extended corporations reach out to customers, suppliers, affiliates and even competitors. The airline industry exemplifies such extended business relationships: "I'm sorry, all our flights are full, but I'll be happy to book your reservation on Competitor Air flight #101."
Extended corporations reach out not only with business relationships; they must also integrate their information systems. Customers, retailers, distributors and manufacturers will blur into a business ecosystem where it is impossible to know who is who. Virtual corporations operate globally, 24 hours a day, seven days a week. Work and tasks follow the sun, reducing business cycle times. The resources of virtual corporations ebb and flow with the changing needs of the moment.
The only way the virtual corporation can work is through information technology. And the only way technology can work is through information systems that mirror human cognition, the way people think when accomplishing work. Information systems must be created from human-centered designs, not technology-centered designs. They must be based on human cognition -- they must be based on reality. Human-centered, distributed object computing is the next generation of business technology. This technology will enable corporations to construct the adaptive information systems needed for 21st century business.
Business technology: the next generation
The velocity of business change is increasing. Business and product cycle times are decreasing. Management is under intense pressure to streamline operations, reduce overhead and squeeze more out of production and sales channels in order to maximize shrinking margins. The global marketplace is becoming a business battleground as companies reach into all corners of the world to attract new customers.
To gain the advantage, forward-thinking businesses are redesigning core business processes. Such companies are becoming smaller and more horizontal as layers between top management and the shop floor worker or sales clerk are removed. Organizations are becoming driven by knowledge-based workers as people on the shop floor and in the field are empowered with information and the authority to make tactical decisions themselves.
Information is power. Corporations are grappling with exploding demands for information. Competitive pressures make it absolutely necessary to connect islands of information, resources and people together into a cohesive whole. The new business objectives demand a fully integrated information framework and infrastructure. Furthermore, the entire workforce must have access to this common information infrastructure. From a technology perspective, this means universal access that is both transparent and adaptive. This business objective has a profound impact on the mission and on the very nature of commercial enterprise computing. Today's computing architectures and design methods for constructing information systems are simply not capable of handling the requirements of next generation computing. And, as Jim Stikeleather explains in an Object Magazine article, "There are not enough programmers in the world to meet the demands of new information systems being generated by newly reengineered businesses... at least, not using traditional practices and technologies."8
Next generation information systems shift much of the business information residing in workers' heads to computers. For this to happen, fundamental changes must occur in the way information systems are conceived, designed and built.
The design of next generation information systems requires a new way of thinking that changes the very nature of design -- a paradigm shift. Distributed object computing supplies a paradigm for building universal, transparent and adaptive information infrastructures and systems.
As corporations become more extended and externally integrated, then the community memory and organizational knowledge become the property of the information systems, not of the people of the organization. Consequently, not only are new types of systems necessary, but also new ways of human interaction with information systems are needed. With current systems approaches, the onus is on humans to try to make sense of information from disjointed presentations of data. New types of information systems are needed that place the onus on the computer system to correctly convey information in a form that humans can process naturally. The information must reflect reality as perceived and understood by humans, not the artificial constructs of transactions, tables and spread-sheets characteristic of today's systems. Where it once stood on the sidelines, cognitive science plays a central role in new era information systems. Systems rooted in human cognition can enable instant use (no training required), correct assimilation, confirmation of user intentions and error-free communication between man and machine. Such systems are required by the extended enterprise.
The challenge of enterprise computing
The recent attention and investments in client/server technologies are in response to the strategic demand for enterprise computing. However, as explained in The Technical Resource Connection's white paper, Client/Server Architectures, current development methods, tools and architectures have hit technical and information walls. Maximum limits have been reached. Today's popular development tools and techniques suffer a number of problems concerning application scalability, modularity, granularity and maintainability. In addition, they overwhelm the ability of users to assimilate diverse information. As such, these tools and techniques will not advance us much past our current position with respect to our ability to develop the necessary information systems in a timely and cost-effective manner.
Client/server technology is essential, but not sufficient. The issues limiting systems development today are not just technical. In our quest for supreme technology, business concepts have remained second-class citizens, standing in the shadow of a high performance infrastructure. Clients and servers continue to evolve technically but, in these implementations, the business itself is not visible. Client/server technology, by itself, does not supply a uniform cognitive model for sharing information among systems and people.
Jeffrey Sutherland highlights the pitfalls of the typical database server model where application logic is embedded in proprietary scripts in a user interface and hardwired to data elements in a relational database server. In what he calls the "powerblender" syndrome, business logic and interface screens are blended together resulting in redundancy and maintenance problems.9
Sutherland maintains these and other limitations of the current client/server models "promise to make them tomorrow's legacy systems." Even as "tiers" are added to decouple business logic, user presentation, network operations and database processing, fundamental design methods center on user screens and forms. Forms or screen-based designs do not promote reuse, nor do they support workflow or other applications that are identified as a result of business reengineering. Conspicuously missing from most client/server models are the key abstractions of the business. Again, the issue is assimilation of information by people. This is where object orientation adds significant value to current client/server models.
Client/server design and development needs to incorporate the power of object-oriented technology. The graphical user interface (GUI) portion of client applications may appear object-oriented but, contrary to vendor claims, client/server development needs to make much more extensive use of object technology at both the client and the server level; many "object-based" tools do not allow developers to do so. Object orientation in this context is much more than attaching a procedural scripting language to graphical interface objects. It extends beyond the GUI to encompass application logic that does not necessarily have a display component. In addition, object-oriented design methods produce fine-grained objects that may be distributed to any and all resources on the network.
Conversely, object-oriented technology needs to harness the power of client/server architectures. The enterprise object advantage is not realized when objects are applied "in-the-small," on the desktop, or within a department. Objects need the distribution advantages of client/server architectures if reuse is to be achieved beyond individual applications.
When the two technologies are combined as one, the next generation of business computing will have arrived. No, the next generation is not quite here, but we are standing at the threshold. When it arrives, we can meet the challenge of enterprise computing.
The challenge of enterprise computing is to support the new management structures and work procedures evolving in business today. Twentieth century technology was used to automate nineteenth century management structures. Today, progressive companies are reinventing themselves by replacing management structures and redesigning jobs. All of this redesign is aimed at adding value in a global marketplace that operates in the context of next generation information technology.
The challenge is to align information technology, business strategy, processes and organizations for a business environment characterized by rapid, near constant change. This complex environment requires very sophisticated information systems. Such systems can be built only if the complexities of the business and its underlying technology are modeled with clarity. The business and software models must be built so they are understood and owned by business professionals, not technologists. These models must reflect and support reality -- distributed, independent, continuous and real-time business processing.
The blending of the cognitive and semantic integrity of objects with the distribution potential of client/server architectures holds great promise for meeting the challenge of enterprise computing. This new computing paradigm, distributed object computing, is explained in the next section.
Distributed object computing
Distributed object computing is a breakthrough framework for computing that has resulted from the convergence of object-oriented and client/server technologies. When radio and motion picture technologies converged, something completely new happened. Unlike the radio and the movies, television pervaded and fundamentally changed society. Television combined the distribution advantages of radio with the richness of real-world information contained in moving pictures. The result was far more than the sum of the two technologies. Distributed object computing blends the distribution advantages of client/server technology with the richness of real-world information contained in object-oriented models. Distributed object computing will fundamentally change the information landscape of business, and something totally new will start to happen. The way business software is developed will change forever.
Distributed object computing is a computing paradigm that allows objects to be distributed across a heterogeneous network, and allows each of the components to interoperate as a unified whole. To an application built in a distributed object environment, as expressed in Sun Microsystems' slogan, the network is the computer.
Business processes are essentially human phenomena. Modeling the real business processes is not a matter of modeling organization charts and company policy manuals. Real business modeling requires that we model the ways work is actually accomplished, the ways things really happen: over the phone, through e-mail, and by way of the informal human network. Successful models cut straight to the core of real business processes. They capture the real business entities and operations that accomplish work and produce value. Object orientation was developed in the 1960s to provide the capability to build models that reflect real systems.
Objects interact by passing messages to each other. These messages represent requests for information or services. During any given interaction, objects will dynamically assume the roles of clients and servers. The physical glue that ties the distributed objects together is the object request broker (ORB), which provides the means for objects to locate and activate other objects on a network, regardless of the processor or programming language used to develop either client or server objects, and which hides their implementation details from the developer. The ORB makes these things happen transparently; it is the middleware of distributed object computing that allows interoperability in heterogeneous networks of objects. The significant benefit of ORBs is that they remove the network messaging complexity from the mind's eye of the developer, refocusing the developer's view from technical issues to the business objects and the services they provide.
When objects are distributed across a network, clients can be servers and, conversely, servers can be clients. That really does not matter, since we are talking about cooperating objects: the client requests services of another object, and the server object fulfills the request. Clients and servers can be physically anywhere on the network and written in any object-oriented programming language. Although universal clients and servers live in their own dynamic worlds outside of an application, the objects appear as though they are local within the application, since the network is the computer. In essence, the whole concept of distributed object computing can be viewed as simply a global network of heterogeneous clients and servers or, more precisely, cooperative business objects.
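The developer's view can be sketched as follows. Every name in this sketch is hypothetical and commercial ORBs differ in their interfaces, but the shape is representative: resolve an object by name, then message it as though it were local.

    #include <iostream>
    #include <memory>
    #include <string>

    // The server object's interface: no location, language or platform details.
    class ShipmentTracker {
    public:
        virtual ~ShipmentTracker() = default;
        virtual std::string status(const std::string& shipmentId) = 0;
    };

    // In a real broker, a proxy like this forwards the message across the
    // network to wherever the server object lives; the stub stands in for
    // that machinery so the sketch is self-contained.
    class ShipmentTrackerProxy : public ShipmentTracker {
    public:
        std::string status(const std::string& shipmentId) override {
            return "in transit (" + shipmentId + ")";
        }
    };

    class Orb {
    public:
        // Locate and activate a remote object; return a local proxy for it.
        std::shared_ptr<ShipmentTracker> resolve(const std::string& /*name*/) {
            return std::make_shared<ShipmentTrackerProxy>();
        }
    };

    int main() {
        Orb orb;
        auto tracker = orb.resolve("Warehouse/ShipmentTracker");
        // The server could be COBOL on a mainframe or Smalltalk on a
        // workstation; the calling code neither knows nor cares.
        std::cout << tracker->status("SH-4711") << "\n";
    }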
Furthermore, valuable legacy systems can be "wrapped" so that they appear to the developer as objects. Once wrapped, the legacy code can participate in a distributed object environment. Wrapping is a technique of creating an object-oriented interface that can access specific functionality contained in one or more existing (legacy) computer applications.
Each set of wrappers carves out a slice of legacy code and represents it as a real world object (person, container, shipment). Corporations have significant investments in computer systems that were developed prior to the advent of object-oriented technology. Even though a corporation may want to migrate to object-oriented technology to derive its benefits, millions or billions of dollars of existing computing resources cannot be scrapped just because a new technology comes along. There is not a solid business case for "converting" legacy assets to the object paradigm. However, a strong business case can be made for an evolutionary approach to building new generation applications that incorporate and leverage existing assets.
Several approaches to wrapping legacy systems can be taken, from simple "screen scraping" to direct function calls to existing code. Approaches will depend on what legacy asset is being wrapped: A mainframe COBOL application, an Excel spreadsheet on a microcomputer, or a relational database server in an existing client/server application all require different approaches. Regardless of the means used to wrap legacy applications, the result is that existing assets can become full participants in a distributed object application.
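As a hedged illustration of the idea, the following Java sketch wraps a legacy inquiry behind an object-oriented interface. LegacyGateway, the "SHIPINQ" transaction and the Shipment class are all invented for illustration; the gateway stands in for whatever transport actually reaches the legacy system, whether screen scraping or a direct function call.

    // Hypothetical wrapper: an object-oriented facade over a legacy inquiry.
    // LegacyGateway stands in for whatever mechanism reaches the old system
    // (screen scraping, a transaction call, a database query, and so on).
    interface LegacyGateway {
        String execute(String transactionId, String arguments);
    }

    // The rest of the distributed object application sees only Shipment.
    class Shipment {
        private final LegacyGateway gateway;
        private final String shipmentId;

        Shipment(LegacyGateway gateway, String shipmentId) {
            this.gateway = gateway;
            this.shipmentId = shipmentId;
        }

        // A real-world operation, implemented by driving the legacy system.
        String currentStatus() {
            // For example, invoke a legacy transaction and tidy its reply.
            return gateway.execute("SHIPINQ", shipmentId).trim();
        }
    }

Whatever sits behind the gateway, callers of Shipment see only a real-world business object.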
In a distributed object environment, an application supports a business process or task by combining active business objects. Component assembly is becoming the dominant theme for developing distributed object applications. Object interaction is accomplished through a sophisticated messaging system that allows objects to request services of other objects regardless of the machine or machines on which they physically reside.
Objects only know what services other objects provide (their interfaces), not how they provide those services (their implementation). The hiding of implementation details within objects is one of the key contributions of object-oriented technology to managing complexity in distributed computing. In a distributed object environment, the application developer does not even have to consider what machine or programming language is used to implement the server objects. The user's view of an application may include an object written in C++ running on one machine, a COBOL program running on a mainframe, a Smalltalk object running inside the user's workstation, and an Excel spreadsheet running on a microcomputer. The user is not concerned with these platforms and programming environments. The user simply sees objects interacting with one another as a unified whole.
The objects appear to the user and developer as familiar business objects, not machines, networks and programming languages. Users and developers do not have to think in terms of the technology, only in terms of familiar business objects.
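A brief Java sketch makes the separation of interface and implementation concrete. The PriceQuote interface and both implementations are hypothetical; the point is that code written against the interface cannot tell, and need not care, which implementation it is talking to.

    // Clients depend only on the interface, never on an implementation.
    interface PriceQuote {
        double quote(String productCode);
    }

    // One implementation might front a remote pricing service...
    class RemotePriceQuote implements PriceQuote {
        public double quote(String productCode) {
            // In a real system, a network call would happen here.
            return 19.95;
        }
    }

    // ...another might decorate an existing quote. Client code cannot tell.
    class DiscountPriceQuote implements PriceQuote {
        private final PriceQuote base;
        DiscountPriceQuote(PriceQuote base) { this.base = base; }
        public double quote(String productCode) {
            return base.quote(productCode) * 0.90;  // apply a 10 percent discount
        }
    }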
Business objects provide pre-assembled business functionality that can be used to wire together and customize applications. All the while, business objects hide the complexities of "back-end" database processing and other technologies. The Object Management Group defines a business object as "representations of the nature and behavior of real world things or concepts in terms that are meaningful to the business. Customers, products, orders, employees, trades, financial instruments, shipping containers and vehicles are all examples of real-world concepts or things that could be represented as business objects. Business objects add value over other representations by providing a way of managing complexity, giving a higher level perspective, and packaging the essential characteristics of business concepts more completely. We can think of business objects as actors, role-players, or surrogates for the real world things or concepts that they represent."10
Actually, business applications become "smart" views of the business objects, rather than the stand-alone chunks of code known as applications today.
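As a rough sketch of that idea, assuming an invented Customer business object and CreditScreen view, the application reduces to presentation logic layered over shared business objects:

    // Hedged sketch: the application as a "smart" view of a business object.
    // Customer and CreditScreen are illustrative names only.
    class Customer {
        private final String name;
        private final double creditLimit;

        Customer(String name, double creditLimit) {
            this.name = name;
            this.creditLimit = creditLimit;
        }

        String name() { return name; }
        boolean mayPurchase(double amount) { return amount <= creditLimit; }
    }

    // The "application" supplies presentation; the business logic stays
    // inside the business object, where every view can share it.
    class CreditScreen {
        void show(Customer c, double requested) {
            System.out.println(c.name() + (c.mayPurchase(requested)
                    ? " is approved for " : " is declined for ") + requested);
        }
    }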
In summary, distributed object computing is an extension of client/server technology. However, it differs in how it works and how it is implemented. With client/server, there is generally an application running on a client computer while another application runs on a server computer. These two applications communicate across a network and relay data to one another, usually via some middleware provided in the form of an application program interface (API) or function call library.
In a sense, client/server is a restrictive version of distributed object computing. A distributed application is made up of objects, just as any other object-oriented application. However, the objects of a distributed object application may be split up and run on multiple computers throughout a network.
Distributed object computing is not magic. It is a very complex computing environment and requires a very sophisticated and robust technology infrastructure. The information technology infrastructures of the future must be based on very sound architectures as we will discuss next.
Architecture: the key to adopting distributed object computing
When beginning a traditional application development project,
developers typically do not have an overall systems architecture from
which to start. This fact is a universal problem when elevating
business computing from individual applications to enterprise-wide
information systems.
An architecture is a high-level description of the organization of functional responsibilities within a system. An architecture is not, however, a description of a specific solution to a problem or a roadmap for success in design. It provides guidance in determining where functionality should reside and how to organize it, but it does not direct a designer to a successful, powerful or elegant solution to a specific problem.
The goal of an architecture is to convey information about the general structure of systems. In that sense, an architecture defines the relationship of system components, but does nothing to describe the specific implementation of those components. Client/server is an architecture in the sense that it defines a relationship between system components. It should be kept in mind that an architecture is not a solution in itself, just a framework for developing any number of specific solutions.
Computing architectures do not address the detailed design of an application. Today, most developers create an application as though it were the only one they will ever develop. There is no grand design of which technologies should be employed or why, no thought of how information should be encapsulated, accessed and assembled into applications, and no common framework for interaction with existing applications.
Effective application development projects require the definition of an architecture -- an overall plan for the infrastructure of information and technologies. Although there is no single, all-encompassing architecture for computing, two architectures should be devised before developing next generation applications: a technical architecture and an information architecture.
A technical architecture provides a blueprint for assembling technology components. It defines what tools and technologies will be used and how they will be used. These definitions may include objects that encapsulate several databases, middleware and other technologies, as well as which development tools to use and how they will be integrated to provide a complete software project support environment.
An information architecture describes the content, behavior and interaction of business objects, building a semantically rich model of the problem domain. The information architecture prescribes the building blocks for application development, and the business objects it describes use the services of the technical architecture objects. An information architecture provides a framework for the information components of the business: subjects, events, roles, associations and business rules.
The basic seven-layer model that makes up a business object supports the technology insulation necessary for the business-oriented processing of the future. The bottom two layers are technology insulation and implementation layers, areas into which business developers rarely venture. These layers are based on traditional object-oriented programming models (class/instance) and tools (C++, Smalltalk). The bottom layer provides the infrastructure for an object-oriented environment. The second layer supplies hardware insulation and a higher level of business intrinsics, such as a "money object" that includes formatting, decimal arithmetic and currency conversion.
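A hedged Java sketch of such a second-layer business intrinsic follows. The Money class, its two-decimal rounding rule and the caller-supplied conversion rate are all assumptions made for illustration.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    // Illustrative "money object": decimal arithmetic, formatting and
    // currency conversion packaged as a reusable business intrinsic.
    final class Money {
        private final BigDecimal amount;
        private final String currency;   // ISO code such as "USD"

        Money(String amount, String currency) {
            this.amount = new BigDecimal(amount);
            this.currency = currency;
        }

        // Decimal arithmetic: exact, unlike binary floating point.
        Money add(Money other) {
            if (!currency.equals(other.currency))
                throw new IllegalArgumentException("convert before adding");
            return new Money(amount.add(other.amount).toPlainString(), currency);
        }

        // Currency conversion with a caller-supplied rate.
        Money convertTo(String targetCurrency, BigDecimal rate) {
            return new Money(amount.multiply(rate).toPlainString(), targetCurrency);
        }

        // Formatting: two decimal places, banker's rounding.
        public String toString() {
            return currency + " " + amount.setScale(2, RoundingMode.HALF_EVEN);
        }
    }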
The third layer is where the developer creates the basic "entities" of the problem domain (e.g., item, person, truck), their associations with one another (e.g., employment = person + company), and their roles in the business (e.g., a truck can be a fixed- or leased-asset). The development environment should support both text and visual programming (like OpenStep) and visual assembly.
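The formula "employment = person + company" can be rendered directly in code. In the hypothetical Java sketch below, the association is itself a first-class object, with room to carry its own state such as the person's role.

    // Illustrative third-layer sketch: two entities and an association.
    class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    class Company {
        final String name;
        Company(String name) { this.name = name; }
    }

    // The association is an object in its own right:
    // employment = person + company.
    class Employment {
        final Person employee;
        final Company employer;
        final String role;   // the person's role in the business

        Employment(Person employee, Company employer, String role) {
            this.employee = employee;
            this.employer = employer;
            this.role = role;
        }
    }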
The next four layers provide a realm for business problem domain experts and end users. With the appropriate tools, this environment permits the visual assembly of entities from the lower layers. These layers should also support visible and maintainable rules and constraints. At the "scenarios" level, for example, users can assemble entities to interact in some scenario of the business, such as sales forecasting.
The "events," or fifth, layer essentially replaces the traditional transaction by allowing users to define events of the business that may influence previously defined scenarios, for example, a competitor changing its price or a new employee being hired. Such business events tend to generate many transactions that must be applied to many traditional legacy systems; several transactions must be cascaded to satisfy the requirements of a single business event. In next generation systems, scenarios will subscribe to events of interest, replacing traditional transaction models, as the sketch below illustrates.
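This is a minimal Java sketch of that subscription model; all names (BusinessEvent, Scenario, EventChannel) are invented for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the events layer: scenarios subscribe to business events
    // instead of receiving a cascade of hand-coded transactions.
    interface BusinessEvent {
        String description();   // e.g., "competitor price change"
    }

    interface Scenario {
        void onEvent(BusinessEvent event);   // e.g., re-run a sales forecast
    }

    class EventChannel {
        private final List<Scenario> subscribers = new ArrayList<>();

        void subscribe(Scenario scenario) {
            subscribers.add(scenario);
        }

        // One business event fans out to every interested scenario.
        void publish(BusinessEvent event) {
            for (Scenario scenario : subscribers) {
                scenario.onEvent(event);
            }
        }
    }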
The sixth, or "agents," layer allows the organization of multiple events into a workflow, with the seventh and final "view" layer supplying the human interface. This layer is intelligent, recognizing both the context and content of the information being exchanged with the external world and even adapting the presentation to the abilities of the outside user.
Distributed object computing requires standards
Without standards, we would not have the world's first fully automatic,
universal network -- the telephone. Likewise, we will not have
interoperating networks of objects without standards. The Object
Management Group (OMG) is a consortium formed in 1989 to define the
standards required for distributed object systems in heterogeneous
environments. OMG's objective is the definition of the Object
Management Architecture (OMA), which includes four sets of standards:
the Common Object Request Broker Architecture (CORBA), Common Object
Services Specification (COSS), Common Facilities, and Application
Objects.
In 1992, OMG approved a standard architecture called the Common Object Request Broker Architecture (CORBA) that defined the services to be provided by an ORB. Since then, several vendors have been working on their own distributed object computing products, primarily ORBs.
To date, there are only a handful of commercial implementations of distributed object computing products available. However, several more are in the works and should be available soon. A few of the better known ORBs commercially available today or about to become available include Expersoft's PowerBroker, IBM's DSOM, Sun's NEO, IONA's Orbix and HP's ORB+. A complete and powerful ORB implementation provides C++ developers with the ability to fully distribute application objects among various platforms and provides client access to distributed objects from Windows 3.1 and NT applications. A more in-depth discussion of ORBs is available in the book The Essential Distributed Objects Survival Guide (see Suggested Readings).
Of the roughly dozen ORB implementations announced to date, most, if not all, claim to be CORBA-compliant or plan to be in the near future. ORB interoperability is addressed by the CORBA 2.0 specification.
The standards that are essential to the next generation of computing are currently being written. Standards do not come easily, and at least one computer company is setting its own standards for distributed object computing. Still, the business community is under such competitive pressure and the need for next generation business computing is so pressing that standards are imminent.
Many companies are moving forward without complete standards by using the ability of object-oriented technology to hide implementation details. As standards evolve, these companies can undo their proprietary plumbing and plug in the standards. To be successful with this approach, however, the emerging standards must be tracked closely. Standards to watch include Microsoft's OLE/COM; OMG's CORBA, COSS, Common Facilities and BOMSIG; and CI Labs' OpenDoc. In addition, emerging technologies to watch include IBM's SOM/DSOM, Sun's NEO and Java, Taligent's development environment, IONA's lightweight Orbix with fault-tolerance and OCX support, NeXT's OpenStep and HP's CORBA 2.0-compliant ORB that sits on top of DCE.
The technical advantage of distributed objects
Object orientation can radically simplify applications development.
Distributed object models and tools extend an object-oriented
programming system. The objects may be distributed on different
computers throughout a network, living within their own dynamic library
outside of an application, and yet appear as though they were local
within the application. This is the essence of plug-and-play software.
Several technical advantages result from a distributed object
environment.
The overall technical goal of distributed object computing is clear: to advance client/server technology so that it may be more efficient and flexible, yet less complex. The benefits of distributed objects are indeed solutions to the problems with existing client/server paradigms.
The business impact of distributed object computing
Distributed object computing, plus business objects, equals the next generation of business computing. New era systems are already being developed by business pioneers who understand this technology. They understand that the ultimate next generation methods and tools are still in the laboratory, but they know that sufficient standards and development resources are already in place. With a head start in their respective industries, these professionals intend to become well armed for 21st century business warfare. Time waits for no one.
The next generation fuses business and technology
Corporations are undertaking business process redesign efforts to
maintain their edge in an increasingly competitive and complex world.
As experience continues to be gained, businesses are demanding that the
methods and tools of business engineering and software engineering be
more tightly linked.
As Dr. David Taylor points out, "Convergent engineering offers a new opportunity to create more flexible, adaptive business systems by combining business and software engineering into a single, integrated discipline."11 New companies such as Open Engineering, Inc., are offering services and tools for object-oriented business engineering (OOBE).
Business and software models are simulations of the business
In the future, business object models likely will be developed that are simulations of the "real" business. Consider the space shuttle. Prior to the shuttle's first pieces being riveted together, the aeronautical marvel of our time was "built" in a simulator. The simulator was not discarded after the actual shuttle was built. When a shuttle is launched, the live mission includes real-time processing by the simulator. If the real shuttle encounters trouble, NASA engineers turn to the simulator to analyze the problem and simulate alternative corrective actions. Real data from the shuttle is mirrored in the simulator.
Space exploration is fully dependent on modeling and simulation. Such simulation makes forward and reverse engineering possible in a real-time environment. This is also true of the enterprise of the future. Simulators, in the form of business object models, will be used to design critical business processes. As innovative processes are deployed, they will provide real-time data for the business simulator. As the business encounters unexpected trouble in the marketplace, it will turn to the simulator to explore corrective action. Furthermore, with the deployment of intelligent components such as neural nets in the business model, business simulators will learn from the information fed back to them from real business activity. Buck Rogers? Not really. After all, object-oriented technology was developed in the 1960s as a natural approach to modeling and simulation. An intelligent simulator can learn from real data. Businesses that learn to deploy such technology will be first to identify market shifts and emerging patterns of demand. They intend to be first to market with innovations. They understand that to be second is to be last.
Businesses need intelligent systems
The flattened organizations that result from business reengineering
require that decision-making be placed in the hands of workers with
direct customer contact. Systems capable of supporting this way of doing business must be intelligent, giving workers access to the diverse information and decision-making tools needed for on-the-spot, quality decisions. Worker empowerment without the best available information resources will not work, and redesigned business processes will fail.
Next generation systems require next generation development methods
Developing business applications from business objects is radically different from the current state of the practice of software development. Object-oriented project management approaches require different kinds of life cycles, deliverables and technology support. In his seminal work, Object-Oriented Programming, Brad Cox12 spoke of a software revolution that would result from the use of Software-ICs (analogous to hardware integrated circuits). A new software industry will flourish. Software factories will fabricate, customize and assemble software from standard, reusable parts. Cox's software revolution is well under way.
TQM++
With billions of objects distributed everywhere, and with atomic-sized
objects being reused over and over, businesses cannot afford to have
less than 100 percent reliable objects comprising their information
systems. Consider the quality assurance methods that have been
developed, deployed and demanded in the aeronautics field. The lives of
millions of human beings depend on 100 percent quality in each atomic
part of a modern airliner. At 35,000 feet, 99 percent quality in atomic
parts is totally unacceptable.
What are the acceptable risks when a corporation's information infrastructure is made up of billions of objects that communicate in cyberspace? Total quality management, a buzzword of the early 1990s, takes on a very serious role in distributed object systems. To construct the new world of object-oriented information systems, defect-free components are absolutely essential. True total quality management (not the management platitude) is absolutely essential.
21st century corporations will be learning organizations
Arie de Geus of Royal Dutch/Shell observed, "We understand that the only competitive advantage the company of the future will have is its ability to learn faster than its competitors."13 The next generation of computing will not come easily. Both business and technology professionals must learn how to think in fundamentally new ways. Mentoring and team learning are essential to building a learning organization capable of keeping up with rapidly evolving object technology and business engineering. In his book, The Blueprint for Business Objects,14 Peter Fingar makes it quite clear that "corporate training as usual" no longer applies. As noted by Peter Senge, systems thinking is at the core of the new corporate curriculum, and business and technology professionals must learn general systems thinking.15
The bottom line
We are on the threshold of the next generation of business. The next
generation of technology -- distributed object computing -- supplies the
backdrop for the next generation of business practice. The challenge
to businesses that wish to excel in the 21st century is to develop an
evolutionary approach to bridge the two worlds of today and tomorrow,
along with the two realities of business and technology.
Additional Resource
Suggested Readings