The education of an IT architect
This month: IT architect Glenn Kimball shines a little light down the road to glory, as a reader seeks advice on how and where to earn his IT credentials. Plus: Jason May speaks his mind on the ("dead") DCE initiative; Performance Q&A with Adrian Cockcroft; and more.
Hands on, or by the book?
Hi Glenn,
Do you have any advice for developers who wish to become IT architects? I feel I need to have hands-on experience before claiming to understand a technology, yet I've noticed that IT architects tend to get the understanding without the hands-on development experience. What are the best ways to gain experience? Seminars? Which kind? Vendor presentations? Books? Trade journals?
John Hilgart
John,
Your observation is accurate: Experience is the best way to gain the
knowledge necessary to architect enterprises, but accumulating it
generally takes a long time. The vast majority of architects have
roots in a specific technological sphere (e.g., databases, middleware,
user interfaces, the Web). Most have built an architectural support
structure around that core competency, itself the product of
experience, seminar attendance, and good old-fashioned research. The
architect position is a lofty goal for most. It requires that
experience and skill development occur simultaneously, even as you
build on your knowledge of new technologies and techniques for solving
problems.
Gaining the knowledge to support your role as architect will always be
difficult. My advice is to do what comes naturally to you when you're
building that knowledge. If you crave hands-on experience, go for it,
but keep an eye on new technologies and how they affect your
environment. Develop a specialty area and stick to it, build up
supporting skills in the technologies around it, and never forget that
an understanding of business issues is paramount. If you don't provide
a system that solves a problem, you've probably provided a system that
is a problem.
Glenn Kimball
DCE is dead: Here's why
Jason,
At the end of April's IT Architect column in the table called "A brief middleware taxonomy," you comment: "DCE is dead. Avoid." Will you elaborate on that?
John Fauerby
John,
The Open Software Foundation's DCE initiative got started almost 10
years ago, and it never really went anywhere. DCE was from the
beginning a bloated, unstable mishmash that didn't run effectively on
any platform. The vendors that initially planned to release DCE
software have gradually faded away, and there are no viable firms left
doing it. The OSF never realized that the PC would be the enterprise
desktop of choice, and the only PC DCE product on the market came from
a small, little-known firm (Gradient).
None of the major deficiencies of DCE (such as code bloat and the
lack of an asynchronous communication API) were ever remedied.
Firms that made major investments in DCE are now stuck with a heap
of unsupported and unsupportable code.
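To make the second complaint concrete, here's a minimal C sketch
contrasting the blocking semantics of a DCE-style RPC stub with the
fire-and-forget semantics of message queueing. Both functions are
simplified stand-ins for illustration, not real DCE or MOM APIs:

    /* Illustration only: a DCE-style RPC pins the calling thread
     * down for the whole round trip, while a MOM enqueue returns
     * immediately.  Both functions are hypothetical stand-ins. */
    #include <stdio.h>
    #include <unistd.h>

    /* Stand-in for an IDL-generated RPC stub: the caller blocks
     * until the server replies (simulated here with sleep). */
    static long rpc_get_balance(long account)
    {
        sleep(2);              /* network round trip + server work */
        return 100 * account;  /* the server's "reply" */
    }

    /* Stand-in for a MOM enqueue: control returns as soon as the
     * message is handed to the queueing layer. */
    static void queue_put(const char *queue, const char *msg)
    {
        printf("enqueued on %s: %s\n", queue, msg);
    }

    int main(void)
    {
        /* RPC: nothing else happens in this thread for ~2 seconds. */
        printf("balance: %ld\n", rpc_get_balance(42));

        /* MOM: the request is queued and we move on at once; any
         * reply arrives later on another queue. */
        queue_put("BALANCE.REQUESTS", "account=42");
        printf("free to do other work while the request is in flight\n");
        return 0;
    }

With no asynchronous API, a DCE application that wants the second
behavior has to spin up extra threads to fake it.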
There is absolutely no good reason for anyone to even consider DCE
today. We now have a host of lightweight, specialized middleware
options -- MOM from IBM and Tibco, Tuxedo, the CORBA products, and of
course Microsoft's DCOM. I'm a little worried that CORBA could go the
way of DCE, but the vendors appear to be avoiding the excesses of DCE
and are cooperating as they never have before. Unfortunately, there is
no clear leader in this space -- no "strategic" middleware option that
provides a full set of application communications functionality,
performs well, is backed by a strong company, and has a clear
direction for the future. That was the promise of DCE, but it failed
badly.
Jason May
Fitting middleware into the picture
Jason,
How do the architectures from TINA-C and TMN, and the frameworks from the Open Group and the Network Management Forum (the Management Systems Framework) fit into the middleware picture? Is middleware just a subset of these?
Vic Charlton
Vic,
The organizations (TINA-C, the NMF) and standards (TINA, TMN) you
mentioned are all related to the provision of telecommunications
services and telecom network management. The standards these groups
are defining absolutely include middleware elements, but they also
appear to include specifications for the various network elements,
management requirements, service-level expectations, and so forth.
As these specifications are implemented by the various consortium
members, it will be necessary to drive the high-level architectural
principles down to the level of specific technology selections. If
the members' systems are expected to interoperate, at some point a
common choice will have to be made among MOM, RPC, CORBA, DCOM, and
so forth.
Jason May
CPU on reserve
Adrian,
Is processor partitioning possible on single-CPU Solaris 2.6 systems? Among the hundreds of systems I am working on, each has a single embedded SPARC processor running Solaris 2.6. We have many different applications running, and it would be very convenient to limit each application to a percentage of the available CPU power (i.e., application A would only be allowed to consume X percent of the CPU regardless of the load on the system).
Currently, one process can occasionally consume 100 percent of the CPU, which causes problems when another application also needs significant CPU to perform time-critical work. The processes do real-time data acquisition, and a few key processes already run with real-time priority.
The idea is to "reserve" a percentage of the CPU for each application, thereby ensuring that an overloaded CPU condition does not occur.
What do you think?
Dave Wright
Dave,
What you want is a share scheduler. Sun has announced plans to
implement one, but it isn't yet available; a share scheduler requires
detailed changes to the Solaris kernel. The product we've been working
on is Share II from Softway (www.softway.com.au). However, if you have
a lot of small systems, it may be too expensive as a per-system
option.
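In the meantime, a crude user-level approximation is to duty-cycle the
worst offenders with SIGSTOP and SIGCONT from a monitor process. Below
is a minimal sketch of that idea (the program name, the
100-millisecond period, and the command-line interface are arbitrary
choices for illustration). It adds latency, so use it only to rein in
batch-style hogs, never the time-critical real-time processes
themselves:

    /* cpucap: hold a target process near a given share of one CPU
     * by alternating SIGCONT and SIGSTOP.  A rough stopgap only --
     * no substitute for a kernel share scheduler.  If you interrupt
     * cpucap while the target is stopped, send it SIGCONT by hand.
     *
     * Usage: cpucap <pid> <percent>
     */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <percent>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atol(argv[1]);
        int pct = atoi(argv[2]);
        if (pct < 1 || pct > 99) {
            fprintf(stderr, "percent must be 1..99\n");
            return 1;
        }

        /* Each 100-ms period: run for pct ms, stop for the rest. */
        const long period_us = 100000;
        long run_us  = period_us * pct / 100;
        long stop_us = period_us - run_us;

        for (;;) {
            if (kill(pid, SIGCONT) != 0) {  /* target has exited */
                perror("kill(SIGCONT)");
                return 1;
            }
            usleep(run_us);
            kill(pid, SIGSTOP);
            usleep(stop_us);
        }
    }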
Adrian Cockcroft
Disk fragmentation: how, why, and do we care?
Adrian,
How does Solaris handle disk fragmentation? Does it automatically defragment periodically? If not, how do we deal with it, or do we care?
Ronald Kwok
Ronald,
UFS file systems are always slightly fragmented. UFS was not designed to
handle very large files. I recommend using Veritas VxFS for efficient
access to gigabyte-sized files. Sun sells this as an option.
Another option, if you create files and leave them in place for a long
time, is a third-party disk defragmenter/sorter called Eagle DiskPak.
I think you can download a demo from www.eaglesoft.com that will show
you how fragmented your UFS file systems are.
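Fragmentation matters mostly because it turns sequential I/O into
seeks, so a quick way to decide whether you care is to time a large
sequential read and compare the rate with what the disk can deliver.
Here's a minimal C sketch (the name and the 64-KB block size are
arbitrary choices); run it on a file that isn't already cached -- say,
one larger than RAM -- or you'll measure memory speed rather than the
disk:

    /* readrate: time a sequential read of a file.  A badly
     * fragmented file reads well below the disk's sequential rate
     * because of the extra seeks. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        static char buf[64 * 1024];   /* 64-KB reads */
        struct timeval t0, t1;
        long long bytes = 0;
        ssize_t n;

        gettimeofday(&t0, NULL);
        while ((n = read(fd, buf, sizeof buf)) > 0)
            bytes += n;
        gettimeofday(&t1, NULL);
        close(fd);

        double secs = (t1.tv_sec - t0.tv_sec) +
                      (t1.tv_usec - t0.tv_usec) / 1e6;
        if (secs <= 0)
            secs = 1e-6;              /* avoid division by zero */
        printf("%lld bytes in %.2f s = %.1f MB/s\n",
               bytes, secs, bytes / secs / 1e6);
        return 0;
    }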
Adrian Cockcroft
Right on the mark
Peter and Carole,
I don't get to read SunWorld as often as I'd like, but I'm never disappointed when I do. Your recent column regarding protection of Web servers is greatly appreciated. Moreover, it was precise, timely, and understandable. Keep it up.
Kip Knight