Caching to relieve bandwidth congestion
How do emerging caching solutions work? What are the considerations for designing an effective cache infrastructure?
The use of caching to help control network traffic jams has been gaining more and more momentum. Vendors are introducing new products left and right. How exactly does caching work through the various levels of an intranet out to the Internet? What problems remain unsolved? SunWorld contributor Robert E. Lee answers these questions and more. (2,000 words)
Over and over again the cry goes out -- when will Internet response time improve? When will backbone and access pipes respond the way manufacturers, Internet service providers (ISPs), and industry evangelists claim they will?
The bigger issue is really one of preserving the limited bandwidth available in any network once traffic travels beyond a server's directly connected segment. Bandwidth costs (and, it seems, the demand for data) climb steeply as distance increases.
This is where the model of a caching solution comes to the forefront.
Caching considerations
It's helpful to understand the nature of the network and the elements that are driving the market to require caching technologies. According to Jim Balderston, an industry analyst at Zona Research, the technology is steadily gaining importance: within a year, he predicts, caching will be a standard item on the shopping list for anyone setting up a wide area network.
Today, caching addresses the redundant movement of static information across any wide area network, and the Internet is the biggest driver of that movement. As the network has grown, especially with the arrival of tens of millions of non-technical users and terabytes of personal Web space, Internet performance has taken a nose dive. The result has been the launch of private networks and the Internet2 initiative, which aims to put a segregated network in place for those organizations and applications that require high-speed dedicated bandwidth.
In Figure 1 you can see how the structure of the Internet requires an increasingly large pipe as you go deeper into the core of the network. As a consequence, costs escalate because the core of the network is also the portion with the longest distance to travel. In the beginning those costs were borne by the public infrastructure and subsidies that founded the Internet. But commercialization has exhausted the excess capacities of the original Internet and brought its seemingly unlimited scalability to a grinding halt.
Figure 1: Internet bandwidth aggregation
On the corporate networking front, the excruciating cost of the backbone has opened the door to a more intelligent method of distributing content across organizations. While bandwidth in the United States remains a bargain, cross any international boundary and the pain threshold in the pocketbook demands more effective distribution.
As each client, represented in blue, places a request to the Internet for a piece of static information, that request is passed through to the server hosting the pages. If multiple clients are requesting the same information, the requests are not aggregated at the local nodes, in this case colored red. They are passed through to the regional node, green, and on to the backbone nodes, purple.
The initial implementations of caching technology reduced bandwidth by placing caches at the client level or at the red and green server levels. Caching at the client level addresses requests that are redundant for an individual user, but it does nothing about redundancy across the organization. Caching at the red level cuts the number of requests for a small segment of the clients and reduces the initial requirement for connection bandwidth into the Internet environment. This helps all the way through the system, because every request answered by the caching server is one less request passed upstream.
Unsolved problem
Unfortunately, the problem isn't solved that simply. To begin with,
the number of caching servers in the network is increasing. Left at the red
level, the demand for data across the backbone of the network is still too great.
The present approach is to introduce caching servers farther up the line, which helps by reducing the total number of requests passed upstream, but a complete solution is still a long way off.
Because the purveyors of the Internet have run amok with solutions for highly interactive and dynamic content, the initial concept of caching has encountered problems. Content developers are faced with the need to keep Web clients interested in a site by plying them with personalized content. Following are just a few of the technologies that stem, in whole or in part, from that need: streaming audio and video, telephony over IP, advertising push, database-driven content, and dynamic page content. These technologies create nothing less than a new page at every access, which means the original design for caching no longer lessens the demand for bandwidth.
Emerging solutions
A brave new world of automation and intelligence in the network is
emerging to solve this problem. The solutions run the gamut from
intelligent servers to sophisticated content to advances in routing
technologies.
It helps to picture possible solutions through a graphical model of the caching environment, shown in Figure 2. Here you have a corporate wide area network that is connected to the Internet. At the headquarters site we find an array of cache servers, whose function is to cache all requests for Internet data for all locations of the corporation. The second major site in the corporation also has a caching array to cache significant Internet and intranet requests. Each of the other sites has a simple cache server based on the lower demands of these locations.
Figure 2: Graphical model of the caching environment
This framework introduces a series of concepts that need to be understood. The first is the concept of multiple caches within a single domain. Under this construct, the Internet Cache Protocol (ICP) provides a tool for a local cache server to query other caching servers for the presence of valid content. If the content is located within the corporate network, the local caching server is updated accordingly, and traffic only flows within the intranet. If the requested content is not found within the intranet, the query is passed out to the Internet. This technique alone can significantly reduce the number of requests over the Internet circuit.
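To make the query mechanism concrete, here is a minimal Python sketch of an ICP query as defined in RFC 2186. ICP runs over UDP, and Squid-style caches conventionally listen on port 3130; the peer hostname and URL below are hypothetical stand-ins, not part of any product described here.

    import socket
    import struct

    ICP_OP_QUERY = 1   # opcode for "do you have this URL?"
    ICP_VERSION = 2

    def build_icp_query(request_number, url):
        # Payload: 4-byte requester host address (zeroed here),
        # then the URL as a null-terminated string.
        payload = struct.pack("!I", 0) + url.encode("ascii") + b"\x00"
        header = struct.pack(
            "!BBHIIII",
            ICP_OP_QUERY,       # opcode
            ICP_VERSION,        # protocol version
            20 + len(payload),  # total message length in bytes
            request_number,     # lets us match the reply to this query
            0,                  # options
            0,                  # option data
            0)                  # sender host address (often zeroed)
        return header + payload

    # Hypothetical peer in the same corporate network.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(build_icp_query(1, "http://www.example.com/index.html"),
                ("peer-cache.example.com", 3130))

An ICP_OP_HIT reply means the peer holds fresh content, so the request stays inside the intranet; a miss lets the local server fall through to the Internet.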
Increasing the amount of cache at the local site further reduces the need for wide area bandwidth. A common assumption is that the cache server must be taken down in order to add memory, disk, or CPU resources. Instead, with caching array technology, additional servers can be added to a live caching environment. Once a few simple administrative tasks are completed, the additional caching server is integrated into the array with no interruption to the users.
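To illustrate why an array can grow without downtime, here is a rough Python sketch of deterministic hash routing, the idea behind proxy-array protocols such as CARP (discussed below). The member hostnames are hypothetical, and real CARP specifies its own hash function; this sketch uses generic highest-random-weight (rendezvous) hashing to show the principle.

    import hashlib

    def route(url, members):
        # Score every member against the URL and pick the winner.
        # Every client computes the same answer, so no query traffic
        # is needed to find which array member owns a URL.
        def score(member):
            digest = hashlib.md5((member + "|" + url).encode()).digest()
            return int.from_bytes(digest[:8], "big")
        return max(members, key=score)

    array = ["cache1.hq.example.com", "cache2.hq.example.com"]
    print(route("http://www.example.com/logo.gif", array))

    # Joining a third server re-homes only the URLs it now wins
    # (roughly a third), so the rest of the cache stays warm and
    # users see no interruption.
    array.append("cache3.hq.example.com")
    print(route("http://www.example.com/logo.gif", array))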
Out to the Internet
Since the Internet is or will be an integral part of any network,
good design recognizes the need to coordinate the caching efforts of your
internal caching servers with those of your ISP.
Shiva Mandalam, product manager for Sun's Netra Proxy
Cache Server, says that 85 percent of all Internet content is static in
nature. David McNeely, product manager for Mission Control at
Netscape, says, "Many large sites have determined that the compute
performance required to continually create content dynamically for
each connection is so great that it makes sense to serve as much
content as possible statically."
These conditions have opened the door to innovative solutions that extend beyond the local ISP. Certainly, it's necessary to factor in the caching ability of your ISP, since it is also concerned with the preservation of bandwidth and quality of service to all of its clients. But the ISP may not have the resources to cache enough of the Internet to meet your needs. Enter companies like Mirror Image, whose Central Cache Connection acts to expand the potential cache hit rate. Mirror Image contends that, "The overall performance of a caching solution is greatly determined by the size of the user base accessing the Internet. Cache 'hit rates' increase as the user population increases. Corporations and ISPs have a finite user population. Hence, cache hit rates are usually limited to the 30 percent to 40 percent range...Central Cache is [then] consulted. Since this cache is shared by a large number of customers, the likelihood of a hit is greatly increased, approaching 70 percent to 75 percent."
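The flow the article has described so far amounts to a chain of caches consulted in order: the local array, the ISP's cache, a shared central cache, and only then the origin server. A minimal sketch, assuming each tier exposes simple get and put operations (an illustrative interface, not any vendor's API):

    def lookup(url, tiers, fetch_origin):
        # tiers might be [local_array, isp_cache, central_cache].
        for tier in tiers:
            body = tier.get(url)
            if body is not None:
                return body            # hit: the request stops here
        body = fetch_origin(url)       # total miss: cross the backbone
        for tier in tiers:
            tier.put(url, body)        # warm each cache on the way back
        return body

Each tier that answers a request spares every link beyond it, which is exactly the bandwidth-preservation argument made above.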
Still more design issues
Does it seem that the concept of designing an effective cache might
be getting out of hand? It should. You have to take into account
the local cache on the users' machines, the cache in the local network,
the cache at key points in the corporate network, the cache of the ISP,
and now the cache services available through dedicated caching providers.
A number of questions begin to arise.
How distant are the caching servers?
Remember, in addition to bringing the cache hit rate up to 70 percent or better,
we're always seeking to preserve bandwidth. If the centralized caching
services are too distant, they will be of little or no use to the
design, as the cost of the link or latency of the network will lower the
quality of service.
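Some rough arithmetic shows the break-even point. The round-trip times below are assumptions for illustration only:

    def expected_latency(hit_rate, hit_ms, miss_ms):
        # Average response time seen by the user.
        return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

    origin_ms = 400.0   # assumed direct fetch across the Internet
    # A miss costs the trip to the cache plus the trip to the origin.
    nearby  = expected_latency(0.70,  60.0, origin_ms +  60.0)  # 180 ms
    distant = expected_latency(0.70, 350.0, origin_ms + 350.0)  # 470 ms

Even at a 70 percent hit rate, the distant service averages 470 milliseconds against 400 milliseconds for no cache at all: once the link latency to the caching service approaches that of the origin servers, the cache stops paying for itself.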
What are the bandwidth bottlenecks to centrally located caching
servers?
While Mirror Image touts that it has a much larger user base, hence
increasing the probability that a page request will be served from
cache, this larger user base will demand more bandwidth to provide
acceptable quality of service. This solution moves the issue of backbone bandwidth from the Internet in general to the path between your site and the caching service.
Is the content being cached cache aware?
Here's another emerging concept that will affect the caching process:
content that declares its own cacheability. Through the HTTP/1.1
standard, it is now possible to define the validity of content
based on time. Provided the caching server recognizes this standard,
as Sun and Netscape's solutions do, expired content can be refreshed
by the cache server as needed, increasing the reliability of the
cache.
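A minimal Python sketch of that time-based validity check, covering the two HTTP/1.1 mechanisms a cache is most likely to see, the Cache-Control max-age directive and the Expires header:

    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime

    def is_fresh(headers, fetched_at):
        # Age of our cached copy, in seconds.
        age = (datetime.now(timezone.utc) - fetched_at).total_seconds()
        for directive in headers.get("Cache-Control", "").split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                return age < int(directive.split("=", 1)[1])
        expires = headers.get("Expires")
        if expires:
            return datetime.now(timezone.utc) < parsedate_to_datetime(expires)
        return False   # no validity information: treat as stale

    headers = {"Cache-Control": "max-age=900"}   # valid for 15 minutes
    print(is_fresh(headers, fetched_at=datetime.now(timezone.utc)))

A production cache applies many more rules (heuristic freshness, revalidation, and so on); this shows only the expiry test described above.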
This standard also lays the foundation for proactive caching fetches that anticipate the need for updated content. If a page of content is dynamic in nature, but served out as a static page with a timeout of 15 minutes, and the caching server determines that this page is frequently retrieved, the caching server can decide to pre-fetch the page during idle periods of traffic, maximizing the use of bandwidth.
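A sketch of that pre-fetch decision, with the thresholds and cache layout invented for illustration:

    import time

    def prefetch_pass(cache, fetch, min_hits=10, horizon_s=120.0):
        # cache maps url -> [hit_count, expires_at, body].
        # Run during idle traffic periods: re-fetch popular pages
        # whose 15-minute validity is about to lapse.
        now = time.time()
        for url, entry in sorted(cache.items(), key=lambda kv: -kv[1][0]):
            hits, expires_at, _ = entry
            if hits >= min_hits and 0 < expires_at - now < horizon_s:
                entry[2] = fetch(url)       # refresh from the origin
                entry[1] = now + 15 * 60    # restart the 15-minute lease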
Does the cache server recognize page objects and intelligently cache
static objects?
This last question draws on the intelligence of the caching server
to break the content of a page into individual objects and treat
each appropriately. Take the example of a page that contains an ad banner,
which needs to be refreshed each time the page is viewed. It used to be that
the entire page would need to be retrieved to satisfy each request. Today's
caching servers can distinguish between static and dynamic segments
of a page and reconstruct it by caching the static area and fetching
the dynamic area.
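In sketch form, with the page manifest and fetch function invented for illustration:

    # A page described as a list of (url, kind) objects.
    page = [
        ("/articles/caching.html", "static"),
        ("/images/masthead.gif",   "static"),
        ("/ads/banner.cgi",        "dynamic"),   # fresh on every view
    ]

    def assemble(page, cache, fetch):
        parts = []
        for url, kind in page:
            if kind == "static" and url in cache:
                parts.append(cache[url])   # served locally, no WAN trip
            else:
                body = fetch(url)          # only dynamic or missing objects
                if kind == "static":
                    cache[url] = body      # keep static objects for next time
                parts.append(body)
        return b"".join(parts)

Only the banner crosses the wide area network on repeat views; the static objects are served from the local cache.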
Solution market
There are a number of solutions on the market today. While this
article isn't going to attempt to review them, follow the links
below to find an option appropriate for your network. The
bottom line is that caching is entering an age of significance in
the network. With protocols like ICP, HTTP/1.1, and CARP (the Cache Array Routing Protocol used by proxy arrays), plus the emergence of services from your ISP and companies
like Mirror Image, users can retrieve content across the Internet or corporate
backbone with a definite reduction in latency and bandwidth.
Your challenge: Define the best combination of caches for your users and begin building out.
Resources
About the author
Robert E. Lee is a technology consultant, speaker, columnist,
and author who has been in the computer industry for 20 years.
He specializes in networking, Internet strategies, systems analysis,
and design activities, and has participated in the Windows NT
and Internet Information Server betas since their respective beginnings.
His most recent features for SunWorld were
"Cobol programming in the Java world," and
"Untangling network wiring."
Reach Robert at rob.lee@sunworld.com.