Borland takes the high road
What Borland's acquisition of Open Environment enterprise client-server tools really means
With Delphi, PowerBuilder, and others reaching maturity in the field, and the margins gone completely out of the low-end tools business, Borland wants to find growth in the enterprise tools market. (1,700 words)
A few years ago, Microsoft choked the life out of Borland International's remarkable low-end programming language tools business with a combination of aggressive pre-announcements, iron-fisted distribution channel control, and adequate technology. Since then, Borland has attempted a series of comeback projects that, somewhat unusually, have taken the vendor up the chain from traditional language tools for individual programmers to client-server. InterBase, a relational database Borland acquired in the late 1980s, had an intriguing peer-to-peer data distribution model, but it failed to catch fire. However, Delphi, its client-server development tool -- though a relatively late entry in a market assumed to be sewn up tight by Powersoft (now Sybase) -- surprised many of us by proving very popular among corporate programmers.
Second-generation client-server development goes beyond individual LANs to encompass multi-server architectures, heterogeneous platforms, widely distributed data, and other aspects of enterprise computing. As client-server implementations proliferate, and as increasingly complex applications require interoperability of client-server with legacy infrastructure, there's plenty of opportunity for useful abstractions and development disciplines to help designers cope. But what abstractions? What disciplines? The answers are far from clear. Nevertheless, Borland has decided to stake out important territory in this field by acquiring Open Environment Corp. of Boston in May.
The progress of application development technology has always gone from dogma to practical tools. Concepts are invented, refined, and put into practice; the best ones are eventually embodied in software tools. In the 60s, this meant high-level languages (3GLs) and compilers; in the mid-70s, it meant structured design methodologies and CASE tools. One of the new pieces of religion is "three-tier client-server." This concept, first floated by Cambridge Technology Group, is the basis for Open Environment's tool architecture.
Three-tier client-server began as a buzzword representing a mundane idea. Typical client-server applications consist of user interfaces on the client side, relational databases on the server, and application logic somewhere in between. The conceit is that "two-tier" client-server constrains the application logic to be on the client side, a scheme that has several disadvantages: the application code is underpowered (e.g., running on a PC instead of a RISC server), the client is harder to manage, and network traffic is excessive. Three-tier client-server, on the other hand, supposedly solves these problems by providing an application layer above the database but still on the server.
Big deal. It has always been possible to put application logic on the server; it's just that application tools did not support doing so. The particular limitation was standard SQL, the query language early relational database systems included. Standard SQL is essentially an implementation of set theory, which is a computing model of limited power. It's impossible, in general, to implement application logic in standard SQL. Later implementations of SQL, such as Sybase's Transact-SQL and Oracle's PL/SQL, added constructs such as loops, conditionals, and stored procedures that made it possible to embed application logic in SQL code. However, just because it's possible to implement application logic in SQL doesn't mean that SQL is the right environment in which to do it.
There is, in my view, little value in attaching a fancy name to the idea of putting application logic on the server. Anyone with a C compiler can do this. The real value of so-called three-tier client server is in separating the concerns of application logic and data access, two potentially quite different types of computing tasks with correspondingly different system requirements. Appropriately, since the "three-tier" concept was introduced, it has been generalized to multi-tier client-server. This means architecting applications by separating them into components that have unique computational characteristics. User interfaces and databases have clearly different sets of concerns; so might a number-crunching portion of an application, a high-volume transaction processing component, and so on. This is hardly front-page news to those of us who have been designing large-scale systems for a while; the terminology simply reflects the idea that we can put different components on different machines on a network.
Abstracting away RPCs
However, we have not had much help from tools in architecting such systems, and that's where Open Environment comes in. Open Environment's principal development tool is called Entera, for "enterprise architecture." They use the term intelligent middleware to describe it. Entera's root technology consists of two layers: remote procedure calls (RPCs) and abstract interfaces. Essentially, Entera embodies a distributed object model that they conceived from the bottom up, as opposed to the Common Object Request Broker Architecture (CORBA) model, which was conceived from the top down.
Remote procedure calls are the lowest useful common denominator of distributed computing, a simple way of utilizing code on one machine from another. RPCs were originally invented to facilitate remote execution across Unix machines in a way that overcomes such low-level differences as machine word size and byte ordering. Each machine has a low-level type model that specifies the physical representation of an integer, floating-point number, etc. That's not very sophisticated, but it's enough to facilitate basic communication between machines -- enough, for example, to allow routines in C (or any language that follows C language calling conventions) to communicate across multiple platforms.
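The representation problem that physical-level RPC solves can be seen in a small marshaling sketch. This is a modern Python illustration of the idea, not Entera's or DCE's actual code; early RPC stubs did the equivalent in C with formats like XDR. The point is that both sides agree on one wire representation (here, big-endian "network" byte order), whatever each machine's native word layout happens to be:

```python
import struct

def marshal_args(an_int, a_float):
    """Pack arguments into a fixed wire format: big-endian 4-byte int
    followed by an 8-byte double, regardless of the local machine's
    native byte order or word size ('>' forces big-endian)."""
    return struct.pack(">id", an_int, a_float)

def unmarshal_args(wire_bytes):
    """The receiving stub reverses the packing into native values."""
    return struct.unpack(">id", wire_bytes)

wire = marshal_args(42, 3.5)
assert unmarshal_args(wire) == (42, 3.5)
```

A stub generator emits a pair of these routines for every remote procedure, which is what lets a caller on one platform invoke C-convention code on another without either side knowing the other's representation.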
This kind of physical- or representation-level RPC is sufficient, for example, for sending a list of parameters off to a number-crunching routine on a compute server. But it's not a rich enough mechanism to describe the complex types of data that must bang around the network in today's enterprise applications.
The solution is to add abstract data type specification on top of the physical-level RPC. As I've said before, abstract data typing is the most important concept to emerge in software engineering since the 3GL. An abstract data type, no matter what syntax is used to express it, is nothing more than a description of a type's behavior, as opposed to its physical representation. Pragmatically, this means specifying the state of, and the operations you can perform on or with, an object of the given type. The idea is to specify these things in a way that makes sense to any piece of code that needs to use an object of the type, but that is independent of the way the behavior is implemented. Thus, the abstract type definition is the ideal way to shield application code from the individualities of different machines, operating systems, etc. -- so that programmers need not worry about such details.
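To make the behavior-versus-representation distinction concrete, here is a minimal sketch in modern Python (the Counter type and both implementations are hypothetical, invented for illustration). Client code is written against the abstract behavior alone, so either representation can sit behind it:

```python
from abc import ABC, abstractmethod

class Counter(ABC):
    """Abstract data type: behavior only, no representation."""
    @abstractmethod
    def increment(self): ...
    @abstractmethod
    def value(self) -> int: ...

class IntCounter(Counter):
    """One representation: a plain integer."""
    def __init__(self): self._n = 0
    def increment(self): self._n += 1
    def value(self): return self._n

class LogCounter(Counter):
    """A different representation: an event log. Same behavior."""
    def __init__(self): self._events = []
    def increment(self): self._events.append("tick")
    def value(self): return len(self._events)

def use(counter: Counter) -> int:
    # Client code depends only on the abstract behavior,
    # never on how the count is stored.
    counter.increment()
    counter.increment()
    return counter.value()

assert use(IntCounter()) == use(LogCounter()) == 2
```

Substitute "machine on the other end of the network" for "representation" and you have exactly the shielding described above: the caller sees only the type's behavior, and the implementation can live anywhere.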
(Those of you who know object-oriented programming should see where this is going. Object-oriented programming is nothing more than abstract data typing plus the mechanism of inheritance. Some people even feel that inheritance is superfluous and that objects equals abstract data types.)
In any case, Entera's second layer consists of abstract interfaces -- another name for abstract data typing. Entera provides a language called IDL, for Interface Definition Language, for writing descriptions of abstract interfaces. An IDL compiler automatically generates the code necessary to implement the abstract interfaces.
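What such a compiler generates can be sketched in a few lines. This is not Entera's IDL or its generated code, which I haven't reproduced here; it is an illustrative Python sketch of the general stub/skeleton pattern, with a hypothetical Calculator interface and JSON standing in for the real wire encoding:

```python
import json

# Hypothetical interface, as an IDL file might declare it:
#   interface Calculator { int add(int a, int b); }

class CalculatorStub:
    """Client-side proxy a stub compiler would generate: turns a
    local call into a marshaled request handed to the transport."""
    def __init__(self, transport):
        self._transport = transport
    def add(self, a, b):
        request = json.dumps({"op": "add", "args": [a, b]})
        return json.loads(self._transport(request))["result"]

class CalculatorSkeleton:
    """Server-side dispatcher: unmarshals the request and invokes
    the real implementation."""
    def handle(self, request):
        msg = json.loads(request)
        if msg["op"] == "add":
            a, b = msg["args"]
            return json.dumps({"result": a + b})
        raise ValueError("unknown operation")

# An in-process call stands in for the RPC layer in this sketch.
server = CalculatorSkeleton()
calc = CalculatorStub(server.handle)
assert calc.add(2, 3) == 5
```

The programmer writes only the interface description and the implementation body; everything between them, on both sides of the wire, is generated.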
RPCs or distributed objects?
So far, all of this sounds like an implementation of CORBA. Indeed, while the IDL language is not the same as CORBA's IDL, a lot of Entera is redundant with CORBA implementations like Orbix from IONA Technologies. Entera goes beyond that in two important ways. First, Entera can run on top of DCE, which is ported to a wider range of platforms than any CORBA implementation.
Second, while CORBA is a static type model that focuses on type specification rather than runtime concerns, Entera includes tools for managing application scheduling and execution. Entera's AppMinder lets you group code modules into configurations that represent entire applications. It also lets you assign modules to run on particular machines. At runtime, AppMinder makes periodic checks on each component of an application and, if any components are not running, attempts to restart them or reassign them to alternate machines. AppMinder is actually a fairly simple tool, but its function is vital in recapturing management simplicity amid the complexity of distributed application components.
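The check-and-restart behavior described above reduces to a simple watchdog loop. The sketch below is my own hypothetical rendering in Python, not AppMinder's actual logic; the component names, hosts, and callbacks are all invented for illustration:

```python
def watchdog_pass(components, is_running, restart, hosts):
    """One periodic pass over an application's components:
    restart any that have died, trying alternate hosts in
    order until one accepts the restart."""
    actions = []
    for name in components:
        if is_running(name):
            continue
        for host in hosts:
            if restart(name, host):
                actions.append((name, host))
                break
    return actions

# Toy demo: 'reports' is down, and the first host refuses the restart.
running = {"orders": True, "reports": False}
failed_hosts = {"hostA"}

def is_running(name):
    return running[name]

def restart(name, host):
    if host in failed_hosts:
        return False          # restart attempt fails on this host
    running[name] = True      # component comes back up elsewhere
    return True

actions = watchdog_pass(["orders", "reports"], is_running, restart,
                        hosts=["hostA", "hostB"])
assert actions == [("reports", "hostB")]
assert running["reports"]
```

A real tool of this kind layers configuration, scheduling, and operator notification on top, but the core management value -- dead components come back without a human noticing -- is this loop run on a timer.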
Entera provides a lot of support for managing enterprise-wide applications and removing the need for programmers to worry about grungy distribution details. Its chief drawback is that its type description language is not based on CORBA, the emerging standard. I question the wisdom of developing abstract interfaces now in a nonstandard type definition language. Open Environment will need a risk-free -- preferably automated -- migration strategy to CORBA.
Borland needs to ensure its new acquisition makes the transition to CORBA. After all, they have been on the short end of object model wars before -- Borland's OWL (Object Windows Library) lost out to Microsoft's Foundation Classes (MFC) in the area of C++ class libraries for Windows. Borland needs to be among a community of vendors centered around open standards; this will ensure them a niche in a healthy market. Borland should know by now that they are too small to carry this type of market by themselves.
Still, there is much strategic value in Borland's acquisition of Open Environment. Borland can now do the proverbial "end run" around Microsoft to the enterprise end of the application development tools market, where Microsoft hardly has a tools strategy beyond the departmental client-server level. Borland should now be firmly established as a competitor to Sybase, Oracle, Sun, etc., but with a different angle. Instead of those vendors' emphasis on downsizing from mainframes, Borland will be positioned to help those organizations that got their businesses up on PCs with Turbo C and Paradox, and grew to client-server with Delphi, grow further to enterprise client-server with Entera. It's a fascinating combination that should inject new life into Borland.