Here's a practical guide on how to protect your networks
What are the terms you will hear in discussions of Internet firewalls? What types of firewall architectures are used at sites today? What are the components that can be put together to build these common firewall architectures? Here's a detailed look at major firewall concepts and design issues. Plus: A look into the future, firewall definitions, and a discussion of private IP addresses. (8,000 words including two sidebars)
The need for firewalls no longer seems to be in question today. As the Internet and internal corporate networks continue to grow, such a safeguard has become all but mandatory. As a result, network administrators increasingly need to know how to effectively design a firewall. This article explains the basic components and major architectures used in constructing firewalls.
The "right solution" to building a firewall is seldom a single technique; it's usually a carefully crafted combination of techniques to solve different problems. Which problems you need to solve depend on what services you want to provide your users and what level of risk you're willing to accept. Which techniques you use to solve those problems depend on how much time, money, and expertise you have available.
Some protocols (such as Telnet and SMTP) lend themselves to packet filtering. Others (such as FTP, Archie, Gopher, and WWW) are more effectively handled with proxies. (We devote an entire chapter of our book Building Internet Firewalls to describing how to handle specific services in a firewall environment.) Most firewalls use a combination of proxying and packet filtering.
Before we explore various firewall architectures, let's discuss two major approaches used to build firewalls today: packet filtering and proxy services.
Packet filtering systems route packets between internal and external hosts, but they do it selectively. They allow or block certain types of packets in a way that reflects a site's own security policy as shown in the diagram below. The type of router used in a packet filtering firewall is known as a screening router.
A screening router makes its permit-or-deny decisions based on information in the packet headers: the IP source and destination addresses, the protocol (TCP, UDP, or ICMP), and the TCP or UDP source and destination ports. In addition, the router knows things about the packet that aren't reflected in the headers themselves, such as the interface the packet arrives on and the interface it will go out on.
The fact that servers for particular Internet services reside at certain port numbers lets the router block or allow certain types of connections simply by specifying the appropriate port number (such as TCP port 23 for Telnet connections) in the set of rules specified for packet filtering. (Chapter 6 in our book describes in detail how you construct these rules.)
Here are some examples of ways in which you might program a screening router to selectively route packets to or from your site:

- Block all incoming connections from systems outside the internal network, except for incoming SMTP connections (so that you can receive email)
- Block all connections to or from certain systems you distrust
- Allow email and FTP services, but block dangerous services like TFTP, the X Window System, RPC, and the "r" services (rlogin, rsh, rcp, and so on)
To understand how packet filtering works, let's look at the difference between an ordinary router and a screening router.
An ordinary router simply looks at the destination address of each packet and picks the best way it knows to send that packet towards that destination. The decision about how to handle the packet is based solely on its destination. There are two possibilities: the router knows how to send the packet towards its destination, and it does so; or the router does not know how to send the packet towards its destination, in which case it drops the packet and notifies the source via an ICMP "destination unreachable" message.
A screening router, on the other hand, looks at packets more closely. In addition to determining whether or not it can route a packet towards its destination, a screening router also determines whether or not it should. "Should" or "should not" are determined by the site's security policy, which the screening router has been configured to enforce.
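The "should or should not" decision a screening router makes can be sketched in a few lines of code. The rule fields and the sample policy below are illustrative assumptions, not a real router configuration:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    action: str              # "permit" or "deny"
    direction: str           # "inbound" or "outbound"
    protocol: str            # "tcp", "udp", or "*" for any
    dst_port: Optional[int]  # None matches any destination port

def screen(packet: dict, rules: List[Rule]) -> str:
    """Return the action of the first rule the packet matches; deny by default."""
    for r in rules:
        if r.direction != packet["direction"]:
            continue
        if r.protocol not in ("*", packet["protocol"]):
            continue
        if r.dst_port is not None and r.dst_port != packet["dst_port"]:
            continue
        return r.action
    return "deny"  # fail safe: anything not explicitly permitted is blocked

# Illustrative policy: permit inbound Telnet (TCP 23) and SMTP (TCP 25) only.
policy = [Rule("permit", "inbound", "tcp", 23),
          Rule("permit", "inbound", "tcp", 25)]

print(screen({"direction": "inbound", "protocol": "tcp", "dst_port": 25}, policy))  # permit
print(screen({"direction": "inbound", "protocol": "udp", "dst_port": 69}, policy))  # deny
```

Note the default at the end of the loop: real screening routers are also normally configured so that anything not explicitly permitted is denied.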
Although it is possible for a screening router alone to sit between an internal network and the Internet, as shown in the diagram above, this places an enormous responsibility on that one router. Not only does it need to perform all routing and routing decision-making, but it is also the only protecting system; if its security fails (or crumbles under attack), the internal network is exposed. Furthermore, a straightforward screening router can't modify services. A screening router can permit or deny a service, but it can't protect individual operations within a service. If a desirable service has insecure operations, or if the service is normally provided with an insecure server, packet filtering alone can't protect it.
A number of other architectures have evolved to provide additional security in packet filtering firewall implementations. Later in this article, we show how additional routers, bastion hosts, and perimeter networks may be added to the firewall implementations in the screened host and screened subnet architectures.
Proxy services are specialized application or server programs that run on a firewall host: either a dual-homed host with an interface on the internal network and one on the external network, or some other bastion host that has access to the Internet and is accessible from the internal machines. These programs take users' requests for Internet services (such as FTP and Telnet) and forward them, as appropriate according to the site's security policy, to the actual services. The proxies provide replacement connections and act as gateways to the services. For this reason, proxies are sometimes known as application-level gateways.
(Firewall terminologies differ. Whereas we use the term proxy service to encompass the entire proxy approach, other authors refer to application-level gateways and circuit-level gateways. Although there are small differences between the meanings of these various terms, in general our discussion of proxies refers to the same type of technology other authors mean when they refer to these gateway systems.)
Proxy services sit, more or less transparently, between a user on the inside (on the internal network) and a service on the outside (on the Internet). Instead of talking to each other directly, each talks to a proxy. Proxies handle all the communication between users and Internet services behind the scenes.
Transparency is the major benefit of proxy services. It's essentially smoke and mirrors. To the user, a proxy server presents the illusion that the user is dealing directly with the real server. To the real server, the proxy server presents the illusion that the real server is dealing directly with a user on the proxy host (as opposed to the user's real host).
Note: Proxy services are effective only when they're used in conjunction with a mechanism that restricts direct communications between the internal and external hosts. Dual-homed hosts and packet filtering are two such mechanisms. If internal hosts are able to communicate directly with external hosts, there's no need for users to use proxy services, and so (in general) they won't. Such a bypass probably isn't in accordance with your security policy.
How do proxy services work? Let's look at the simplest case, where we add proxy services to a dual-homed host. (We describe these hosts in some detail in the "Dual-homed host architecture" section of this article.)
As the diagram below shows, a proxy service requires two components: a proxy server and a proxy client. In this situation, the proxy server runs on the dual-homed host. A proxy client is a special version of a normal client program (e.g., a Telnet or FTP client) that talks to the proxy server rather than to the "real" server out on the Internet; alternatively, normal client programs can often be used as proxy clients if users are taught special procedures to follow. The proxy server evaluates requests from the proxy client, and decides which to approve and which to deny. If a request is approved, the proxy server contacts the real server on behalf of the client (thus the term "proxy"), and proceeds to relay requests from the proxy client to the real server, and responses from the real server to the proxy client.
Using proxy services with a dual-homed host
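At its core, an approved proxy connection is just a policy check plus two byte relays, one in each direction. The following is a minimal sketch; the host names in ALLOWED are hypothetical, and a real proxy server would add logging, timeouts, and per-user authorization:

```python
import socket
import threading

# Hypothetical policy: the only real servers this proxy will talk to.
ALLOWED = {("ftp.example.com", 21), ("telnet.example.com", 23)}

def pipe(src, dst):
    # Copy bytes one way until the source closes, then pass the EOF along.
    while data := src.recv(4096):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle(client, target):
    # The proxy server evaluates the request against the site's policy...
    if target not in ALLOWED:
        client.close()  # request denied
        return
    # ...and, if approved, contacts the real server on the client's behalf,
    # then relays traffic in both directions behind the scenes.
    server = socket.create_connection(target)
    threading.Thread(target=pipe, args=(server, client), daemon=True).start()
    pipe(client, server)
```

To the client, the proxy looks like the real server; to the real server, the connection appears to originate from the proxy host.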
In some proxy systems, instead of installing custom client proxy software, you'll use standard software, but set up custom user procedures for using it. (We describe how this works in Chapter 7 of our book.)
A proxy service is a software solution, not a firewall architecture per se. You can use proxy services in conjunction with any of the firewall architectures described in the section called "Firewall Architectures" below.
The proxy server doesn't always just forward users' requests on to the real Internet services. The proxy server can control what users do, because it can make decisions about the requests it processes. Depending on your site's security policy, requests might be allowed or refused. For example, the FTP proxy might refuse to let users export files, or it might allow users to import files only from certain sites. More sophisticated proxy services might allow different capabilities to different hosts, rather than enforcing the same restrictions on all hosts.
There is some excellent software available for proxying. SOCKS is a proxy construction toolkit, designed to make it easy to convert existing client/server applications into proxy versions of those same applications. The Trusted Information Systems Internet Firewall Toolkit (TIS FWTK) includes proxy servers for a number of common Internet protocols, including Telnet, FTP, HTTP, rlogin, X11, and others; these proxy servers are designed to be used in conjunction with custom user procedures. (See the discussion of these packages in Chapter 7 of our book.)
Many standard client and server programs, both commercial and freely available, now come equipped with their own proxying capabilities, or with support for generic proxy systems like SOCKS. These capabilities can be enabled at run time or compile time.
There are a variety of ways to put firewall components together. Let's examine some of these approaches in detail.
Dual-homed host architecture
A dual-homed host architecture is built around the dual-homed host computer, a computer that has at least two network interfaces. Such a host could act as a router between the networks these interfaces are attached to; it is capable of routing IP packets from one network to another. However, to implement a dual-homed host firewall architecture, you disable this routing function. Thus, IP packets from one network (such as the Internet) are not directly routed to the other network (such as the internal, protected network). Systems inside the firewall can communicate with the dual-homed host, and systems outside the firewall (on the Internet) can communicate with the dual-homed host, but these systems can't communicate directly with each other. IP traffic between them is completely blocked.
The network architecture for a dual-homed host firewall is pretty simple: the dual-homed host sits between, and is connected to, the Internet and the internal network. The diagram below shows this architecture.
Dual-homed host architecture
Dual-homed hosts can provide a very high level of control. If you aren't allowing packets to go between external and internal networks at all, you can be sure that any packet on the internal network that has an external source is evidence of some kind of security problem. In some cases, a dual-homed host will allow you to reject connections that claim to be for a particular service but that don't actually contain the right kind of data. (A packet filtering system, on the other hand, has difficulty with this level of control.) However, it takes considerable work to realize these potential advantages consistently.
A dual-homed host can provide services only by proxying them, or by having users log into the dual-homed host directly. As we discuss in Chapter 5 of our book, user accounts present significant security problems by themselves. They present special problems on dual-homed hosts, where they may unexpectedly enable services you consider insecure. Furthermore, most users find it inconvenient to use a dual-homed host by logging into it.
Proxying is much less problematic, but may not be available for all services you're interested in. Some workarounds for this situation (as discussed in Chapter 7 of our book) do exist, but they do not apply in every case. The screened subnet architecture we describe in the next section offers some extra options for providing new and/or untrusted services. (For example, you can add to the screened subnet a worthless machine that provides only an untrusted service).
Screened host architecture
Whereas a dual-homed host architecture provides services from a host that's attached to multiple networks (but has routing turned off), a screened host architecture provides services from a host that's attached to only the internal network, using a separate router. In this architecture, the primary security is provided by packet filtering. (For example, packet filtering is what prevents people from going around proxy servers to make direct connections.)
The diagram below shows a simple version of a screened host architecture.
The bastion host sits on the internal network. The packet filtering on the screening router is set up in such a way that the bastion host is the only system on the internal network that hosts on the Internet can open connections to (for example, to deliver incoming email), and even then, only certain types of connections are allowed. Any external system trying to access internal systems or services will have to connect to this host, so the bastion host needs to maintain a high level of host security.

The packet filtering also permits the bastion host to open allowable connections (what is "allowable" will be determined by your site's particular security policy) to the outside world. The section about bastion hosts in the discussion of the screened subnet architecture, later in this article, contains more information about the functions of bastion hosts, and Chapter 5 of our book describes in detail how to build one.
The packet filtering configuration in the screening router may do one of the following:

- Allow other internal hosts to open connections to hosts on the Internet for certain services (allowing those services directly via packet filtering)
- Disallow all connections from internal hosts (forcing those hosts to use proxy services via the bastion host)
You can mix and match these approaches for different services; some may be allowed directly via packet filtering, while others may be allowed only indirectly via proxy. It all depends on the particular policy your site is trying to enforce.
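A mix-and-match policy of this kind amounts to a per-service lookup table. The services and dispositions below are purely illustrative; your site's policy will differ:

```python
# Hypothetical per-service policy for a screened host architecture:
# "direct" services pass straight through the screening router,
# "proxy" services must go through the proxy servers on the bastion host,
# and anything unlisted is denied outright.
SERVICE_POLICY = {
    "smtp":   "direct",
    "telnet": "direct",
    "ftp":    "proxy",
    "www":    "proxy",
    "tftp":   "deny",
}

def how_to_allow(service: str) -> str:
    # Default deny: a service absent from the table is not offered at all.
    return SERVICE_POLICY.get(service, "deny")

print(how_to_allow("smtp"))    # direct
print(how_to_allow("finger"))  # deny
```

Keeping the policy in one table like this also makes it easier to audit what your firewall actually permits.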
Because this architecture allows packets to move from the Internet to the internal networks, it may seem more risky than a dual-homed host architecture, which is designed so that no external packet can reach the internal network. In practice, however, the dual-homed host architecture is also prone to failures that let packets actually cross from the external network to the internal network. (Because this type of failure is completely unexpected, there are unlikely to be protections against attacks of this kind.) Furthermore, it's easier to defend a router, which provides a very limited set of services, than it is to defend a host. For most purposes, the screened host architecture provides both better security and better usability than the dual-homed host architecture.
Compared to other architectures, however, such as the screened subnet architecture discussed in the following section, there are some disadvantages to the screened host architecture. The major one is that if an attacker manages to break in to the bastion host, there is nothing left in the way of network security between the bastion host and the rest of the internal hosts. The router also presents a single point of failure; if the router is compromised, the entire network is available to an attacker. For this reason, the screened subnet architecture has become increasingly popular.
Screened subnet architecture
The screened subnet architecture adds an extra layer of security to the screened host architecture by adding a perimeter network that further isolates the internal network from the Internet.
Why do this? By their nature, bastion hosts are the most vulnerable machines on your network. Despite your best efforts to protect them, they are the machines most likely to be attacked, because they're the machines that can be attacked. If, as in a screened host architecture, your internal network is wide open to attack from your bastion host, then your bastion host is a very tempting target. There are no other defenses between it and your other internal machines (besides whatever host security they may have, which is usually very little). If someone successfully breaks into the bastion host in a screened host architecture, he's hit the jackpot.
By isolating the bastion host on a perimeter network, you can reduce the impact of a break-in on the bastion host. It is no longer an instantaneous jackpot; it gives an intruder some access, but not all.
With the simplest type of screened subnet architecture, there are two screening routers, each connected to the perimeter net. One sits between the perimeter net and the internal network, and the other sits between the perimeter net and the external network (usually the Internet). To break into the internal network with this type of architecture, an attacker would have to get past both routers. Even if the attacker somehow broke in to the bastion host, he'd still have to get past the interior router. There is no single vulnerable point that will compromise the internal network.
Some sites go so far as to create a layered series of perimeter nets between the outside world and their interior network. Less trusted and more vulnerable services are placed on the outer perimeter nets, farthest from the interior network. The idea is that an attacker who breaks into a machine on an outer perimeter net will have a harder time successfully attacking internal machines because of the additional layers of security between the outer perimeter and the internal network. This is only true if there is actually some meaning to the different layers, however; if the filtering systems between each layer allow the same things between all layers, the additional layers don't provide any additional security.
The diagram below shows a possible firewall configuration that uses the screened subnet architecture. The next few sections describe the components in this type of architecture.
The perimeter network is another layer of security, an additional network between the external network and your protected internal network. If an attacker successfully breaks into the outer reaches of your firewall, the perimeter net offers an additional layer of protection between that attacker and your internal systems.
Here's an example of why a perimeter network can be helpful. In many network setups, it's possible for any machine on a given network to see the traffic for every machine on that network. This is true for most Ethernet-based networks (and Ethernet is by far the most common local area networking technology in use today); it is also true for several other popular technologies, such as token ring and FDDI. Snoopers may succeed in picking up passwords by watching for those used during Telnet, FTP, and rlogin sessions. Even if passwords aren't compromised, snoopers can still peek at the contents of sensitive files people may be accessing, interesting email they may be reading, and so on; the snooper can essentially "watch over the shoulder" of anyone using the network.
With a perimeter network, if someone breaks into a bastion host on the perimeter net, he'll be able to snoop only on traffic on that net. All the traffic on the perimeter net should be either to or from the bastion host, or to or from the Internet. Because no strictly internal traffic (that is, traffic between two internal hosts, which is presumably sensitive or proprietary) passes over the perimeter net, internal traffic will be safe from prying eyes if the bastion host is compromised.
Obviously, traffic to and from the bastion host, or the external world, will still be visible. Part of the work in designing a firewall is ensuring that this traffic is not itself confidential enough that reading it will compromise your site as a whole. (This is discussed in Chapter 5 of our book.)
With the screened subnet architecture, you attach a bastion host (or hosts) to the perimeter net; this host is the main point of contact for incoming connections from the outside world. For example:

- Incoming email (SMTP) sessions are directed to the bastion host, which delivers the mail to the site
- Incoming FTP connections are directed to the site's anonymous FTP server, which might be the bastion host
- Incoming domain name service (DNS) queries about the site are directed to the bastion host

and so on.
Outbound services (from internal clients to servers on the Internet) are handled in either of these ways:

- Set up packet filtering on both the exterior and interior routers to allow internal clients to access Internet servers directly
- Set up proxy servers to run on the bastion host (if your firewall uses proxy software) to allow internal clients to access Internet servers indirectly; you would also set up packet filtering to allow the internal clients to talk to the proxy servers on the bastion host (and vice versa), but to prohibit direct communications between internal clients and the outside world
In either case, the packet filtering allows the bastion host to connect to, and accept connections from, hosts on the Internet; which hosts, and for what services, are dictated by the site's security policy.
Much of what the bastion host does is act as proxy server for various services, either by running specialized proxy server software for particular protocols (such as HTTP or FTP), or by running standard servers for self-proxying protocols (such as SMTP).
Chapter 5 of our book describes how to secure the bastion host, and Chapter 8 describes how to configure individual services to work with the firewall.
The interior router (sometimes called the choke router in firewalls literature) protects the internal network from both the Internet and the perimeter net.
The interior router does most of the packet filtering for your firewall. It allows selected services outbound from the internal net to the Internet. These services are the services your site can safely support and safely provide using packet filtering rather than proxies. (Your site needs to establish its own definition of what "safe" means. You'll have to consider your own needs, capabilities, and constraints; there is no one answer for all sites.) The services you allow might include outgoing Telnet, FTP, WAIS, Archie, Gopher, and others, as appropriate for your own needs and concerns. (For detailed information on how you can use packet filtering to control these services, see Chapter 6 of our book.)
The services the interior router allows between your bastion host (on the perimeter net itself) and your internal net are not necessarily the same services the interior router allows between the Internet and your internal net. The reason for limiting the services between the bastion host and the internal network is to reduce the number of machines (and the number of services on those machines) that can be attacked from the bastion host, should it be compromised.
You should limit the services allowed between the bastion host and the internal net to just those that are actually needed, such as SMTP (so the bastion host can forward incoming email), DNS (so the bastion host can answer questions from internal machines, or ask them, depending on your configuration), and so on. You should further limit services, to the extent possible, by allowing them only to or from particular internal hosts; for example, SMTP might be limited only to connections between the bastion host and your internal mail server or servers. Pay careful attention to the security of those remaining internal hosts and services that can be contacted by the bastion host, because those hosts and services will be what an attacker goes after--indeed, will be all the attacker can go after--if the attacker manages to break in to your bastion host.
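Rules of this kind pair a service with the specific hosts allowed to speak it. Here is a small sketch of such a bastion-facing rule set; the addresses are hypothetical, chosen only for illustration:

```python
import ipaddress

# Hypothetical addresses for illustration only.
BASTION     = ipaddress.ip_address("192.168.100.2")  # on the perimeter net
MAIL_SERVER = ipaddress.ip_address("10.1.1.25")      # internal mail hub

def interior_allows(src: str, dst: str, service: str) -> bool:
    """Sketch of the interior router's bastion-facing rules: SMTP only
    between the bastion host and the designated mail server; DNS between
    the bastion host and any internal host; everything else denied."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if service == "smtp":
        return {s, d} == {BASTION, MAIL_SERVER}
    if service == "dns":
        return BASTION in (s, d)
    return False

print(interior_allows("192.168.100.2", "10.1.1.25", "smtp"))  # True
print(interior_allows("192.168.100.2", "10.1.1.99", "smtp"))  # False
```

If the bastion host is compromised, the attacker can now reach only the mail server over SMTP and the DNS service, rather than every internal machine.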
In theory, the exterior router (sometimes called the access router in firewalls literature) protects both the perimeter net and the internal net from the Internet. In practice, exterior routers tend to allow almost anything outbound from the perimeter net, and they generally do very little packet filtering. The packet filtering rules to protect internal machines would need to be essentially the same on both the interior router and the exterior router; if there's an error in the rules that allows access to an attacker, the error will probably be present on both routers.
Frequently, the exterior router is provided by an external group (for example, your Internet provider), and your access to it may be limited. An external group that's maintaining a router will probably be willing to put in a few general packet filtering rules, but won't want to maintain a complicated or frequently changing rule set. You also may not trust them as much as you trust your own routers. If the router breaks and they install a new one, are they going to remember to reinstall the filters? Are they even going to bother to mention that they replaced the router so that you know to check?
The only packet filtering rules that are really special on the exterior router are those that protect the machines on the perimeter net (that is, the bastion hosts and the internal router). Generally, however, not much protection is necessary, because the hosts on the perimeter net are protected primarily through host security (although redundancy never hurts).
The rest of the rules that you could put on the exterior router are duplicates of the rules on the interior router. These are the rules that prevent insecure traffic from going between internal hosts and the Internet. To support proxy services, where the interior router will let the internal hosts send some protocols as long as they are talking to the bastion host, the exterior router could let those protocols through as long as they are coming from the bastion host. These rules are desirable for an extra level of security, but they're theoretically blocking only packets that can't exist because they've already been blocked by the interior router. If they do exist, either the interior router has failed, or somebody has connected an unexpected host to the perimeter network.
So, what does the exterior router actually need to do? One of the security tasks that the exterior router can usefully perform--a task that usually can't easily be done anywhere else--is the blocking of any incoming packets from the Internet that have forged source addresses. Such packets claim to have come from within the internal network, but actually are coming in from the Internet.
The interior router could do this, but it can't tell if packets that claim to be from the perimeter net are forged. While the perimeter net shouldn't have anything fully trusted on it, it's still going to be more trusted than the external universe; being able to forge packets from it will give an attacker most of the benefits of compromising the bastion host. The exterior router is at a clearer boundary. The interior router also can't protect the systems on the perimeter net against forged packets. (We discuss forged packets in greater detail in Chapter 6 of our book.)
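The forgery test itself is simple: on the Internet-facing interface, any packet whose source address falls inside your internal ranges must be forged. The address ranges below are hypothetical placeholders for your own networks:

```python
import ipaddress

# Hypothetical internal address ranges; substitute your own.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def forged_from_outside(src: str) -> bool:
    """A packet arriving on the exterior router's Internet interface whose
    source address claims to be internal cannot be legitimate; drop it."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in INTERNAL_NETS)

print(forged_from_outside("10.1.2.3"))     # True: claims to be internal
print(forged_from_outside("203.0.113.9"))  # False: ordinary external source
```

Because the check depends on which interface the packet arrives on, it naturally belongs on the exterior router, at the boundary between your site and the Internet.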
Variations on firewall architectures
We've shown the most common firewall architectures in the diagrams above. However, there is a lot of variation in architectures. There is a good deal of flexibility in how you can configure and combine firewall components to best suit your hardware, your budget, and your security policy. For a description of some common variations, and their benefits and drawbacks, see Chapter 4 in our book.
The assumption in most of the discussions in this article is that you are building a firewall to protect your internal network from the Internet. However, in some situations, you may also be protecting parts of your internal network from other parts. There are a number of reasons why you might want to do this:

- You have test or lab networks with strange things going on there
- You have networks that are less secure than the rest of your site--for example, demonstration or teaching networks where outsiders are commonly present
- You have networks that are more secure than the rest of your site--for example, secret development projects, or networks where financial data resides
This is another situation where firewalls are a useful technology. In some cases, you will want to build internal firewalls; that is, firewalls that sit between two parts of the same organization, or between two separate organizations that share a network, rather than between a single organization and the Internet.
It often makes sense to keep one part of your organization separate from another. Not everyone in an organization needs the same services or information, and security is frequently more important in some parts of an organization (the accounting department, for example) than in others.
Many of the same tools and techniques you use to build Internet firewalls are also useful for building these internal firewalls. However, there are some special considerations that you will need to keep in mind if you are building an internal firewall.
Laboratory and test networks are often the first networks that people consider separating from the rest of an organization via a firewall (usually as the result of some horrible experience where something escapes the laboratory and runs amok). Unless people are working on routers, this type of firewall can be quite simple. Neither a perimeter net nor a bastion host is needed, because there is no worry about snooping (all users are internal anyway), and you don't need to provide many services (the machines are not people's home machines). In most cases, you'll want a packet filtering router that allows any connection inbound to the test network, but only known safe connections from it. (What's safe will depend on what the test network is playing with, rather than on the normal security considerations.)
In a few cases (for example, if you are testing bandwidth on the network), you may want to protect the test network from outside traffic that would invalidate tests, in which case you'll deny inbound connections and allow outbound connections.
If you are testing routers, it's probably wisest to use an entirely disconnected network; if you don't do this, then at least prevent the firewall router from listening to routing updates from the test network. You can do this a number of ways, depending on your network setup, what you're testing, and what routers you have available. You might do any of the following:

- Use a static route to the test network, and configure the firewall router to ignore dynamic routing updates on the interface leading to it
- Use a routing protocol that supports authentication, and don't give the test network the authentication keys
- Use packet filtering to block the packets that carry routing updates from the test network
If you have a number of test networks, you may find it best to set up a perimeter net for them and give each one a separate router onto the perimeter net, putting most of the packet filtering in the router between the perimeter and the main network. That way, if one test network crashes its router, the rest still have their normal connectivity. The diagram below shows this architecture.
Test networks are dangerous, but not necessarily less secure than other networks. Many organizations also have some networks that are intrinsically less secure than most. For example, a university may consider networks that run through student dormitories to be particularly insecure; a company may consider demonstration networks, porting labs, and customer training networks to be particularly insecure. Nevertheless, these insecure networks need more interaction with the rest of the organization than does a purely external network.
Networks like dormitory networks and porting labs, where external people have prolonged access and the ability to bring in their own tools, are really as insecure as completely external networks and should be treated that way. Either position them as a second external connection (a new connection on your exterior router or a new exterior router) or set up a separate perimeter network for them. The only advantage these networks offer over purely external networks is that you can specify particular software to be run on them, which means you can make use of encryption effectively. (See Chapter 10 of our book for a discussion of how to provide services to external, untrusted networks.)
Demonstration and training labs, where external people have relatively brief, supervised access and cannot bring in tools, can be more trusted (as long as you are sure that people really do have relatively brief, supervised access and cannot bring in tools!). You still need to use a packet filtering router or a dual-homed host to prevent confidential traffic from flowing across those networks. You will also want to limit those networks to connections to servers you consider secure. However, you may be willing to provide NFS service from particular servers, for example, which you wouldn't do to a purely untrusted network. One of your main concerns should be preventing your trusted users from doing unsafe things while working on those networks (for example, logging in to the machines on their desks and forgetting to log out again, or reading confidential electronic mail). This should be done with a combination of training and force (ensuring that the most insecure uses fail).
This is a place where a dual-homed host can be quite useful, even with no proxies on it; the number of people who need to use the host is probably small, and having to log into it will ensure that they see warning messages. The host will also be unable to provide some tempting but highly insecure services; for example, you won't be able to run NFS except from the dual-homed host, and people won't be able to mount their home machine's filesystems.
Just as most organizations have points where they're particularly insecure, most of them have points where they're particularly security-conscious. At universities, these may be particular research projects, or the registrar's office; at commercial companies, these may be new products under development; at almost any place, the accounting and finance machines need extra protection. Some unclassified government work also requires extra protections.
Networks for doing classified work--at any level of classification--not only need to be more secure, but also need to meet all relevant government regulations. Generally speaking, they will have to be separated from unclassified networks. In any case, they are outside of the scope of this book. If you need to set one up, consult your security officer; traditional firewalls will not meet the requirements. (If you don't have a security officer, you're not going to have a classified network, either.)
You can choose to meet your requirements for extra security either by encrypting traffic that passes over your regular internal networks, or by setting up separate networks for the secure traffic. Separate networks are technically easier as long as there are separate machines on them. That is, if you have a secure research project that owns particular computers, and if people log into them to work on that project, it's reasonably simple to set up a straightforward single-machine firewall (a packet filtering router, most likely). That firewall will treat your normal network as the insecure external universe. Because the lab machines probably don't need many services, a bastion host is unnecessary, and a perimeter net is needed only for the most secret ventures.
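A single-machine firewall of this kind boils down to a small, ordered rule table applied to every packet. The sketch below, in Python, is purely illustrative: the addresses, the rule format, and the protected subnets are all invented for this example, and real screening routers each have their own configuration syntax. It shows the essential first-match logic, with a fail-safe default deny.

```python
import ipaddress

# Hypothetical first-match rule table for a screening router that treats
# the normal internal network (172.16.0.0/16) as the insecure "outside"
# and protects a secure lab subnet (192.168.1.0/24).
RULES = [
    # Allow outbound SMTP from the lab to the rest of the site.
    {"action": "allow", "src": "192.168.1.0/24", "dst": "172.16.0.0/16",
     "proto": "tcp", "dport": 25},
    # Allow DNS queries to the site's name server.
    {"action": "allow", "src": "192.168.1.0/24", "dst": "172.16.0.53/32",
     "proto": "udp", "dport": 53},
    # Default: drop everything else in either direction.
    {"action": "deny", "src": "0.0.0.0/0", "dst": "0.0.0.0/0",
     "proto": "any", "dport": None},
]

def screen(packet):
    """Return the action of the first rule the packet matches."""
    for rule in RULES:
        if rule["proto"] not in ("any", packet["proto"]):
            continue
        if rule["dport"] not in (None, packet["dport"]):
            continue
        if (ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(packet["dst"]) in ipaddress.ip_network(rule["dst"])):
            return rule["action"]
    return "deny"  # fail safe: anything not explicitly allowed is dropped
```

Because the rules are evaluated in order and the last rule matches everything, anything you have not explicitly permitted is refused, which is the posture you want when treating your own internal network as the untrusted universe.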
If you are dealing with people whose day-to-day work is secure, and who don't have separate machines for that work, a separate network becomes harder to implement. If you put their machines onto a more secure network, they can't work easily with everybody else at the site, and they need a number of services. In this case, you'll need a full bastion host, and therefore probably a perimeter net to put it on. It's tempting to connect their machines to two networks -- the secure net and the insecure net -- so they can transmit confidential data over one and participate with the rest of the site on the other, but this is a configuration nightmare. If they're attached to both at once, each host is basically a dual-homed host firewall, with all the attendant maintenance problems. If they can only be attached to one at a time, things are more secure. However, configuring the machines is unpleasant for you, and moving back and forth is unpleasant for the user.
At a university, which tends not to have a single coherent network to start with, putting the registrar's office and the financial people on secure networks, firewalled from the rest of the university, will probably work. At a company or government office, where most people work in the same environment, look into using encryption in your applications instead.
Joint venture firewalls
Sometimes, organizations come together for certain limited reasons, such as a joint project; they need to be able to share machines, data, and other resources for the duration of the project. For example, look at the decision of IBM and Apple to collaborate on the PowerPC, a personal computer that runs a common operating system; undertaking one joint project doesn't mean that IBM and Apple have decided to merge their organizations or to open up all their operations to each other.
Although the two parties have decided to trust each other for the purposes of this project, they are still competitors. They want to protect most of their systems and information from each other. It isn't just that they may distrust each other; it's also that they can't be sure how good the other's security is. They don't want to risk that an intruder into their partner's system might, through this joint venture, find a route into their system as well. This security problem occurs even if the collaborators aren't also competitors.
You may also want to connect to an external company because it is an outside vendor to you. A number of services depend on information transfer, from shipping (you tell them what you want to ship; they tell you what happened to your shipment) to architecture (you give them specifications; they give you designs) to chip fabrication (you send them the chip design; they give you status on the fabrication process). These outside vendors are not competitors in any sense, but they frequently also work for competitors of yours. They are probably aware of confidentiality issues and try, to the best of their ability, to protect the information they are supposed to have. On the other hand, if there are routing slip-ups and data you're not explicitly sending to them crosses their networks, they are probably going to be completely unaware of it, and the data will be at risk.
This may seem far-fetched, but it turns out to be a fairly routine occurrence. One company was mystified to discover routes on its network for a competitor's internal network, and still more baffled to discover traffic using these routes. It turned out that the shortest route between them and their competitor was through a common outside vendor. The traffic was not confidential, because it was all traffic that would have gone through the Internet. On the other hand, the connection to the outside vendor was not treated as if it were an Internet connection (the outside vendor itself was not Internet-connected, and nobody had considered the possibility of it cross-connecting Internet-connected clients). Both companies had sudden, unexpected, and unprotected vulnerabilities.
An internal firewall limits exposure in such a situation. It provides a mechanism for sharing some resources while protecting most of them. Before you set out to build an internal firewall, be sure you're clear on what you want to share, what you want to protect, and what you want to accomplish.
An 'arm's-length relationship': shared-perimeter networks
Shared perimeter networks are a good way to approach joint networks. Each party can install its own router, under its own control, onto a perimeter net between the two organizations. In some configurations, these two routers might be the only machines on the perimeter net, with no bastion host. If this is the case, then the "net" might simply be a high-speed serial line (such as a 56-kilobit-per-second line or T1/E1 line) between the two routers, rather than an Ethernet or another type of local area network.
This is highly desirable with an outside vendor. Most of them are not networking wizards, and they may attempt to economize by connecting multiple clients to the same perimeter network. If the perimeter net is an Ethernet or something similar, any client that can get to its router on that perimeter network can see the traffic for all the clients on that perimeter network--which, with some providers, is almost guaranteed to be confidential information belonging to a competitor. Using a point-to-point connection as the "perimeter net" between the outside vendor and each client, rather than a shared multiclient perimeter net, will prevent them from doing this, even accidentally.
Do you need bastion hosts?
You might not actually need to place a bastion host on the perimeter network between two organizations. The decision about whether you need a bastion host depends on what services are required for your firewall and how much each organization trusts the other. Bastion hosts on the perimeter net are rarely required for relationships with outside vendors; usually you are sending data over one particular protocol, which you can adequately protect with a screened host.
If the organizations have a reasonable amount of trust in each other (and, by extension, in each other's security), it may be reasonable to establish the packet filters so that clients on the other side can connect to internal servers (such as SMTP and DNS servers) directly.
On the other hand, if the organizations distrust each other, they might each want to place their own bastion host, under their own control and management, on the perimeter net. Traffic would flow from one party's internal systems, to their bastion host, to the other party's bastion host, and finally to the other party's internal systems.
What the future holds
Systems that might be called "third-generation firewalls" -- firewalls that combine the features and capabilities of packet filtering and proxy systems into something more than both -- are just starting to become available.
More and more client and server applications are coming with native support for proxied environments. For example, many WWW clients include proxy capabilities, and lots of systems are coming with run-time or compile-time support for generic proxy systems such as the SOCKS package.
Packet filtering systems continue to grow more flexible and gain new capabilities, such as dynamic packet filtering. With dynamic packet filtering, such as that provided by the CheckPoint Firewall-1 product, the Morning Star Secure Connect router, and the KarlBridge/KarlBrouter, the packet filtering rules are modified "on the fly" by the router in response to certain triggers. For example, an outgoing UDP packet might cause the creation of a temporary rule to allow a corresponding, answering UDP packet back in.
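The trigger-and-temporary-rule mechanism can be pictured as a small state table keyed on the flow. The following is a toy model of the idea, not any vendor's implementation; the class name, field layout, and 30-second timeout are all assumptions made for illustration.

```python
import time

class DynamicUDPFilter:
    """Toy model of dynamic packet filtering: an outgoing UDP packet
    creates a short-lived rule that admits only the matching reply."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl      # seconds a temporary reverse rule lives
        self.table = {}     # (remote_ip, remote_port, local_ip, local_port) -> expiry

    def outbound(self, src_ip, src_port, dst_ip, dst_port, now=None):
        """Record an outgoing UDP packet; the reply flow is now permitted."""
        now = time.time() if now is None else now
        self.table[(dst_ip, dst_port, src_ip, src_port)] = now + self.ttl

    def inbound_allowed(self, src_ip, src_port, dst_ip, dst_port, now=None):
        """Admit an incoming UDP packet only if a live temporary rule matches."""
        now = time.time() if now is None else now
        expiry = self.table.get((src_ip, src_port, dst_ip, dst_port))
        return expiry is not None and now <= expiry
```

A DNS query is the classic case: the outgoing query to port 53 opens a hole exactly wide enough for the answer from that same server and port, and the hole closes by itself when the timeout expires.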
The first systems that might be called "third generation" are just starting to appear on the market. For example, the Borderware product from Border Network Technologies and the Gauntlet 3.0 product from Trusted Information Systems look like proxy systems from the external side (all requests appear to come from a single host), but look like packet filtering systems from the inside (internal hosts and users think they're talking directly to the external systems). They accomplish this magic through a generous amount of internal bookkeeping on currently active connections and through wholesale packet rewriting to preserve the relevant illusions to both sides. The KarlBridge/KarlBrouter product extends packet filtering in other directions, providing extensions for authentication and filtering at the application level. (This is much more precise than the filtering possible with traditional packet filtering routers.)
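The "internal bookkeeping on currently active connections" these hybrid systems perform can be illustrated with a toy rewriting table. Everything here is hypothetical (the external address, the port range, the class name); the sketch only shows the core idea of making all outbound connections appear to come from one external host while mapping replies back to the right internal machine.

```python
import itertools

class MasqueradeTable:
    """Toy sketch of the connection bookkeeping a hybrid firewall does:
    each outbound connection is rewritten to appear to come from a single
    external address, and replies are mapped back to the internal host."""

    def __init__(self, external_ip="198.51.100.1"):
        self.external_ip = external_ip
        self._ports = itertools.count(40000)   # next external port to hand out
        self.out_map = {}                      # (int_ip, int_port) -> ext_port
        self.in_map = {}                       # ext_port -> (int_ip, int_port)

    def rewrite_outbound(self, int_ip, int_port):
        """Return the (address, port) the outside world sees for this flow."""
        key = (int_ip, int_port)
        if key not in self.out_map:
            ext_port = next(self._ports)
            self.out_map[key] = ext_port
            self.in_map[ext_port] = key
        return self.external_ip, self.out_map[key]

    def rewrite_inbound(self, ext_port):
        """Map a reply back to its internal endpoint, or None (drop it)."""
        return self.in_map.get(ext_port)
```

From the outside, every connection appears to originate at the one external address; from the inside, hosts believe they are talking directly to the external systems, and the table preserves both illusions.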
While firewall technologies are changing, so are the underlying technologies of the Internet, and these changes will require corresponding changes in firewalls.
The underlying protocol of the Internet, IP, is currently undergoing major revisions, partly to address the limitations imposed by the four-byte host addresses in the current version of the protocol (version 4; the existing IP is therefore sometimes called IPv4), and by the large blocks in which those addresses are given out. Basically, the Internet has been so successful and become so popular that four bytes simply isn't a big enough number to assign a unique address to every host that will join the Internet over the next few years, particularly because addresses must be given out to organizations in relatively large blocks.
Attempts to solve the address size limitations by giving out smaller blocks of addresses (so that a greater percentage of them are actually used) raise problems with routing protocols. Stop-gap solutions to both problems are being applied but won't last forever. Estimates for when the Internet will run out of new addresses to assign vary, but the consensus is that either address space or routing table space (if not both) will be exhausted sometime within a few years after the turn of the century.
While they're working "under the hood" to solve the address size limitations, the people designing the new IP protocol (which is often referred to as "IPng" for "IP next generation"--officially, it will be IP version 6, or IPv6, when the standards are formally adopted and ratified) are taking advantage of the opportunity to make other improvements in the protocol. Some of these improvements have the potential to cause profound changes in how firewalls are constructed and operated; however, it's far too soon to say exactly what the impact will be. It will probably be at least 1997, if not later, before IPng becomes a significant factor for any but the most "bleeding edge" organizations on the Internet. (Chapter 6 of our book describes IPv6 in somewhat more detail.)
The underlying network technologies are also changing. Currently, most networks involving more than two machines (i.e., almost anything other than dial-up or leased lines) are susceptible to snooping; any node on the network can see at least some traffic that it's not supposed to be a party to. Newer network technologies, such as frame relay and Asynchronous Transfer Mode (ATM), pass packets directly from source to destination, without exposing them to snooping by other nodes in the network.
In general, sites should obtain and use IP addresses that have been assigned specifically to them by either their service provider or their country's Network Information Center (NIC). This coordinated assignment of addresses will prevent sites from having difficulties reaching other sites because they've inadvertently chosen conflicting IP addresses. Coordinated assignment of addresses also makes life easier (and therefore more efficient) for service providers and other members of the Internet routing core.
Unfortunately, some organizations have simply picked IP addresses out of thin air, because they didn't want to go to the trouble of getting assigned IP addresses, because they couldn't get as many addresses as they thought they needed for their purposes (Class A nets are extremely difficult to come by because there are only 126 possible Class A network numbers in the world), or because they thought their network would never be connected to the Internet. The problem is, if such organizations ever do want to communicate with whoever really owns those addresses (via a direct connection, or through the Internet), they'll be unable to because of addressing conflicts.
RFC 1597 and RFC 1627
RFC (Requests for Comments) 1597 is an Internet standards document that recognizes this long-standing practice of assigning IP addresses for internal use and sets aside certain IP addresses (Class A net 10, Class B nets 172.16 through 172.31, and Class C nets 192.168.0 through 192.168.255) for private use by any organization. These addresses will never be officially assigned to anyone and should never be used outside an organization's own network.
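These reserved blocks can be checked mechanically. Here is a small Python sketch (the function name is ours, invented for illustration) that tests whether an address falls in one of the RFC 1597 private ranges:

```python
import ipaddress

# The three blocks RFC 1597 sets aside for private use.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),       # Class A net 10
    ipaddress.ip_network("172.16.0.0/12"),    # Class B nets 172.16 through 172.31
    ipaddress.ip_network("192.168.0.0/16"),   # Class C nets 192.168.0 through 192.168.255
]

def is_rfc1597_private(addr):
    """True if addr falls in one of the RFC 1597 private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)
```

Note that the sixteen Class B nets form one contiguous block, 172.16.0.0 through 172.31.255.255, which is why a single /12 prefix covers them; 172.32.0.1, by contrast, is outside the reserved range.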
As RFC 1627 (a followup to RFC 1597) points out, RFC 1597 doesn't really address the problem; it merely codifies the problem so that it can be more easily recognized in the future. If a site chooses to use these private addresses, they're going to have problems if they ever want to link their site to the Internet (all their connections will have to be proxied, because the private addresses must never leak onto the Internet), or if they ever want to link their site to another site that's also using private addresses (for example, because they've bought or been bought by such a site).
Our recommendation is to obtain and use registered IP addresses if at all possible. If you must use private IP addresses, then use the ones specified by RFC 1597, but beware that you're setting yourself up for later problems. We use the RFC 1597 addresses throughout this book as sample IP addresses, because we know they won't conflict with any site's actual Internet-visible IP addresses.
You may know some of the firewall terms listed below, and some may be new to you. Some may seem familiar, but they may be used in a way that is slightly different from what you're accustomed to (though we try to use terms that are as standard as possible). Unfortunately, there is no completely consistent terminology for firewall architectures and components. Different people use terms in different--or, worse still, conflicting--ways. Also, these same terms sometimes have other meanings in other networking fields; the basic definitions below are for a firewalls context.
(Marcus Ranum, who is generally held responsible for the popularity of this term in the firewalls professional community, says, "Bastions...overlook critical areas of defense, usually having stronger walls, room for extra troops, and the occasional useful tub of boiling hot oil for discouraging attackers.")
(Some networking literature -- in particular, the BSD Unix release from Berkeley -- uses the term "packet filtering" to refer to something else entirely: Selecting certain packets off a network for analysis, as is done by the etherfind or tcpdump programs).
About the author
D. Brent Chapman (email@example.com) is a consultant in the San Francisco Bay Area, specializing in Internet firewalls. He has designed and built Internet firewall systems for a wide range of clients, using a variety of techniques and technologies. He is also the manager of the Firewalls Internet mailing list.