Death of the PC?
New PC technologies and dropping CPU prices are making PCs more attractive than ever. So why would you switch to Web PCs or virtual PCs?
In this second installment on the future of PCs, we look at new PC technologies that are making them hard to resist. But the reality of Web PCs and virtual PCs is not so distant now. What are their advantages over PCs and what will happen as they make their way into the market? Plus a sidebar on the results of our Web PC survey. (1,900 words)
Last month we raised the question "Will we use PCs in the future the same way we do today?" We started by looking at the new technologies that may make PCs irrelevant for the masses, leaving PCs for the few power users. When we last left off, I was partway through a list of new technologies that will impact the PC market. We have already covered Web PCs and virtual PCs. We now look at software emulation.
The PowerPC helped focus attention on emulation. With Insignia Solutions' first release of SoftPC for the Macintosh, Mac users could run small, simple DOS-based PC programs, but the emulation was in most cases quite slow and painfully limited. When Apple decided to work with IBM and Motorola on the new PowerPC specification, it had to confront the limitations of the 680x0 processor family. The company quickly set out to create a 680x0 emulation system powerful enough to maintain compatibility with the thousands of existing Mac software packages. Emulation technology is now quite effective.
Apple's dependence on PowerPC has resulted in new multiprocessing systems that outpace the fastest Motorola Macs available a year or two ago. The limiting factor in these systems is still the MacOS, which shares some of the problems that Windows 3.x has in its OS design. The new version of MacOS (Copland) has not yet hit the market and is still undergoing constant change.
Nevertheless, Apple's PowerPC systems emulate older Macintosh software almost seamlessly. Insignia Solutions' new SoftWindows products can even emulate Windows 95, a very recent addition to the general desktop world. Aside from a few minor glitches, users no longer face the failing applications and obscure incompatibilities of just a few years ago.
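At the heart of any software emulator like SoftPC or Apple's 680x0 layer is a fetch-decode-execute interpreter loop that reads guest machine code and dispatches each instruction to native handler code. The sketch below illustrates the idea with a hypothetical three-instruction toy ISA; it is not based on any shipping emulator's actual design.

```java
// Minimal sketch of an emulator's fetch-decode-execute loop,
// using a hypothetical toy instruction set (not a real CPU).
public class ToyEmulator {
    // Toy opcodes: HALT stops, LOAD puts an immediate value in the
    // accumulator, ADD adds an immediate value to it.
    static final int HALT = 0, LOAD = 1, ADD = 2;

    // Interprets guest "machine code" one instruction at a time and
    // returns the accumulator value when the guest program halts.
    public static int run(int[] program) {
        int pc = 0;   // guest program counter
        int acc = 0;  // guest accumulator register
        while (true) {
            int opcode = program[pc++];          // fetch
            switch (opcode) {                    // decode + execute
                case LOAD: acc = program[pc++]; break;
                case ADD:  acc += program[pc++]; break;
                case HALT: return acc;
                default: throw new IllegalStateException("bad opcode " + opcode);
            }
        }
    }

    public static void main(String[] args) {
        // Guest program: LOAD 2; ADD 3; HALT
        System.out.println(run(new int[] {LOAD, 2, ADD, 3, HALT}));
    }
}
```

Production emulators replace this interpretive loop with dynamic recompilation of guest code into native instructions, which is why modern PowerPC emulation of the 680x0 feels nearly seamless where early interpreters were painfully slow.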
What about new PC technologies?
Just as a few non-PC or nouveau-PC technologies hold promise, equally many new PC technologies improve the desktop. For example, Microsoft is now backing the Universal Serial Bus (USB) standard, which allows faster peripheral device communications.
USB provides an external desktop bus running at up to 12 megabits per second, and the related IEEE 1394 (FireWire) bus reaches 400 megabits per second. Instead of hooking just a keyboard and mouse to the desktop bus, you will be able to attach VCRs, camcorders, stereo equipment, and other peripherals. At one time, each of these components would have required separate, special boards and connectors. A small PC can now also act as a digital recording unit on the road. The components for this technology are quite cheap, although you would need new camcorders, VCRs, and so on that are capable of this level of control.
One thing we cannot ignore is the falling price of CPUs. Pentium-100 chips, which were the fastest available two years ago, now cost between $80 and $125. As Intel pushes out newer chips every few months, older CPU stock drops to commodity prices. Intel seems to think that $1,000 is a good starting price -- the 486, Pentium, and Pentium Pro (P6) all started around that price point in their early days.
In two years, these falling prices could yield a $500 Pentium-100 with 8 to 16 megabytes of RAM and a one-gigabyte hard drive, which by today's standards would be more than sufficient for the average desktop user. Common packages such as Microsoft Office (in its current incarnation as Office95) will run very comfortably on such a PC. Keep in mind, though, that as the average desktop gets more powerful, so do the applications: developers create more memory- and storage-hungry systems as time passes.
So why would I get something else?
If users could choose their own desktop computers with their organizations footing the bill, they would all be zooming through massive Excel spreadsheets on multiprocessor systems with 21-inch monitors and 128 megabytes of RAM. In reality, however, organizations need manageable (and of course cheap) desktop units that still allow users to be as productive as possible.
With Web PCs and virtual PCs, less complex units mean fewer management issues, which in turn means fewer headaches for the technical staff and lower costs for the business staff. In effect, they open the door to centralized management. Since Web PCs are so new, we cannot yet measure the exact level of management these units will require. IBM, Sun, Oracle, and others are nonetheless reassuring people that the units will be easier to maintain and manage. Their low cost means that "fixing" a unit actually means replacing the entire box, with no need to move hard drives to the replacement PC. The bottom line is that you may save on maintenance.
If you have ever taken apart a PC, you are well aware that repairs that seem straightforward can consume many hours. Even with my decade of experience with these little monsters, I recently managed to wipe out my Gateway Pentium's boot drive when I attempted to add a new hard disk. Thank goodness it contained only software and none of my precious data. The event left me not embarrassed but angrily frustrated (I wasted four days on it!). PC parts simply do not behave the way they are supposed to.
In fact, not to sound sour, but the quality of most PC parts is much lower than that of workstation parts. Situations like this make me understand why Unix workstations and their components command a premium. Intel has even acknowledged that its PCI chipset has flaws that can severely limit the bandwidth of the main bus. Although top PC vendors like Dell, Gateway, IBM, and HP take pains to make sure their systems are in good order, the truth appears to be that quality parallels price.
Does that mean a $500 system is lower quality? Not necessarily. The problem is not with individual components, but with the massive number of different components that must be compatible with one another. The aging PC platform does not always allow such near-perfect interoperability. A closed system like a Web PC has severe limits on add-ons and upgrades (unless you replace the whole unit), so it has fewer compatibility issues.
One other significant problem is the continual "bloating" of PC operating systems to support the ever-increasing demand for features and capabilities. Microsoft, IBM, and Apple build their OSes to be self-sufficient so they don't require external components to function properly. This only works if you assume that the system won't be dependent upon other systems on the network.
You need more RAM to support a larger operating system. Don't be misled by marketing words like "it's a compact microkernel OS." General purpose OSes can be huge compared to special-purpose or dynamically-extensible OSes. This is one reason why NT requires 16 megabytes to function minimally and 32 megabytes to be happy.
Web PCs, by contrast, absolutely require a network in place. The core unit itself holds a small kernel and system support libraries in ROM and extends itself over the network by loading new classes via Java network class loaders and remote-execution objects. This helps explain why a system with eight megabytes of RAM can perform as well as another with 16. However, programmers these days take memory for granted and rarely take care to produce low-memory applications, so there is a limit to what an eight-megabyte Web PC can do for high-end applications.
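The class-loading mechanism described above can be sketched with Java's standard URLClassLoader, which fetches class files from a network code base on demand. The code-base URL below is hypothetical, and this is a conceptual sketch of the idea, not the design of any particular Web PC vendor.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Sketch of network class loading: the mechanism a Web PC would use
// to extend its small ROM-resident system over the network.
public class NetworkLoaderDemo {
    // Builds a loader that fetches classes it cannot find locally
    // from the given (hypothetical) network code base.
    public static ClassLoader makeLoader(String codeBaseUrl) throws Exception {
        return new URLClassLoader(new URL[] { new URL(codeBaseUrl) });
    }

    public static void main(String[] args) throws Exception {
        ClassLoader loader = makeLoader("http://apps.example.com/classes/");
        // A real Web PC would now pull application code off the network,
        // e.g. (hypothetical class name):
        //   Class<?> app = loader.loadClass("com.example.Spreadsheet");
        // Classes already present locally (here, the core Java library,
        // standing in for the ROM) resolve through the parent loader
        // without ever touching the network:
        System.out.println(loader.loadClass("java.lang.String").getName());
    }
}
```

Because only the classes an application actually uses are ever fetched, the resident working set stays small, which is how a network-extensible unit with eight megabytes of RAM can keep pace with a self-sufficient 16-megabyte desktop.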
If (a very big "if") the Web PC concept catches on, then the low end of the PC market could be replaced with these components. As we indicated last month, there have been trials for low-end desktop units for the masses. However, it is only now that hardware technology as well as important system technologies like Java are coming to life. There is a possibility that Web PCs can be successful in time.
Meanwhile, actual deployment of Web PCs remains a big unknown for most companies. They see the potential benefit of reduced management but lack proof that the system can replace the software currently in use. That risk grows if vendors spend their time building hardware systems and never get around to backing these bold new strides with useful software.
Following the trend of scaling computing power, high-end PCs are now in tougher competition with workstation-class systems, at a lower price. High-end PCs from Intergraph with improved graphics hardware compete well with heavy-duty SGI and Sun systems for graphics processing. If this goes on, the workstation and server market could be endangered. More probably, though, high-end PCs will maintain their current status as power-user desktop systems. As long as this arena exists, PCs will probably endure through the end of the millennium and possibly into the next one as well.
Although we may eventually see a major fallout in the PC market, there will still be a huge market for PCs, judging from the hundred million in use now. Until we discover whether the Web PC concept will actually work, the desktop computer we use next may be yet another PC.
Sidebar: Results of our Web PC survey

One out of every four IT managers at medium-to-large companies is considering replacing PCs with Web PCs, citing cheaper hardware and support costs and hopes of greater user productivity. That is surprising, given that this technology has never been deployed on a large scale and is barely out of its alpha stage.
For those who may not appreciate the relative importance of PCs and PC connectivity in the enterprise, PC systems overwhelmingly outnumber Unix computers, as the figures below show. Any methodology for replacing PC-based client/server computing therefore carries significant cost-saving implications.
Number of respondents: 320 (as of 8:00 a.m., April 30, 1996)
Yes: 27.1% No: 64.4% I don't know: 8.5%
Yes: 22.9% No: 54.9% I don't know: 22.2%
Not considering PC replacements: 52.7% Cheaper initial hardware price: 11.4% Cheaper initial software price: 0.7% Cheaper hardware maintenance: 2.3% Cheaper software maintenance: 8.1% Cheaper personnel-related support costs: 10.7% Greater user productivity: 10.4% Other: 3.7%
less than 10% are PCs: 8.2% 10% - 25%: 11.0% 25% - 40%: 7.6% 40% - 75%: 21.1% more than 75% are PCs: 50.8% I don't know: 1.3%
less than 10% are PCs: 4.5% 10% - 25%: 7.0% 25% - 40%: 6.7% 40% - 75%: 20.4% more than 75% are PCs: 57.2% I don't know: 4.2%
1 - 9: 22.8% 10 - 29: 24.7% 30 - 99: 26.3% 100 - 499: 23.7% Not applicable: 2.5%
1 - 9: 7.0% 10 - 29: 6.6% 30 - 99: 9.2% 100 - 499: 17.7% 500 - 1999: 18.7% 2000 - 9999: 21.2% more than 10,000: 16.8% Not applicable: 1.3% I don't know: 1.6%
About the author
Rawn Shah is vice president of RTD Systems & Networking, Inc., a Tucson, Arizona based network consultancy and integrator. Reach Rawn at firstname.lastname@example.org.