Sysadmin by Hal Stern

Hardening a Unix computer for Internet use

How to ready TCP/IP to repel the unwanted and better serve the friendly.

SunWorld
December 1995

Abstract
Few things strike more terror in the hearts of system administrators than connecting a machine directly to the Internet. How can the performance of TCP/IP connections be improved? How do I keep vandals out and users happy? Read on. (3,200 words)


Rob Kolstad, long-time USENIX executive and noted industry personality, often points out that a good system administrator is a master of change on many time scales. That statement is most appropriate in the context of last month's topic, managing TCP/IP connections. New services appear monthly, new host entries pop up daily or every few hours, and you can get hit with a routing table update at least every 30 seconds. The number of moving parts conspires to keep at least some things broken, some of the time. Last month we talked about the myriad configuration problems that interfere early in the process of making a connection. This month, we'll proceed with our discussion of TCP/IP mechanics by covering performance and security concerns.

After identifying the remote system's IP address and desired service port number, the rest of the connection process should be as simple as knocking on a door when you know the street address and floor number on which it's located. Nothing in life, electronic or carbon, is that simple. The door-knocking analogy highlights most of the additional things that can go wrong in TCP-land: nobody answers the remote door (performance problem), you are deemed persona non grata and are turned away at your destination, or you run into troubles getting out of your own building. To start our journey, we'll look at server performance limits that create connection bottlenecks, and then explore the popular TCP wrapper package used to establish access controls over network services. We'll conclude with an overview of the SOCKS tools that let you enjoy the security of a well-locked door but still sneak out for an occasional network snack.


Connection erection
Just because you can name the remote end of a socket with an IP address and port number pair doesn't mean the other side can or even wants to talk to you. Making yourself appear interesting (and trusted) is a security problem we'll cover shortly. Making sure your servers have sufficient connection management resources is a growing performance problem. As the use of network services has exploded, many years-old assumptions about resource allocation have proven far too restrictive.

A server-side process prepares to accept socket connections by first calling listen() and then accept(). The first call marks the socket as ready to receive connections and sets the depth of its incoming connection queue, while the second call pulls completed connections off that queue, one at a time. In pre-Internet-boom days, a default of five pending connections was frequently hard-coded in the implementation of listen(). Current socket interface code, however, honors the argument and sets the queue depth accordingly. When the socket in question is owned by httpd, or any other process that receives a high volume of connection requests, the queue depth is a critical performance limit.
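
As a concrete illustration, here is a minimal sketch of the server-side sequence, with error checking abbreviated; the port number is an arbitrary example, and the second argument to listen() is the queue depth in question:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sin;
	int sd, client;

	sd = socket(AF_INET, SOCK_STREAM, 0);		/* TCP socket */
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	sin.sin_port = htons(8080);			/* arbitrary example port */
	bind(sd, (struct sockaddr *)&sin, sizeof(sin));

	listen(sd, 32);		/* depth of the incoming connection queue */

	for (;;) {
		/* accept() pulls one completed connection off the queue */
		client = accept(sd, (struct sockaddr *)0, (socklen_t *)0);
		if (client < 0)
			continue;
		/* ... hand the descriptor off to a worker, then ... */
		close(client);
	}
}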

An embryonic socket connection goes through a three-way handshake between client and server: the client sends a connection request (SYN), the server replies with an acknowledgement and its own request (SYN-ACK), and the client answers with a final acknowledgement (ACK). The connection stays on the incoming connection queue until the handshake has been completed. Knowing the steps involved will help you determine just how long the average connection dance will take.

The connection remains in the queue for the duration of the last two packet exchanges, or the total of the round-trip network transfer time between client and server, plus the time required for the client to process the server's initial packet.

Once the connection queue is full, further attempts to connect to the socket are discarded. If you find connections are refused, or if your browser is complaining that it can't open a URL because the server isn't responding, you're probably bumping into the backlog limit.

Using a bit of queuing theory, we can determine the maximum connection request rate (RR) knowing the average round-trip time (RT) and the connection queue depth (QD): RR = QD/RT. If the depth is left at its default value of 5, and it takes about 200 msec to complete a round-trip, you can handle 25 connections/second. Increase the latency for a handshake over a series of wide-area links to 500 msec, and that rate drops to 10 connections/second. Crank the queue depth up to 32, however, and you can handle 64 connections/second at 500 msec round-trip, and a more respectable 160/second at 200 msec.

Here's another way to calculate the expected depth of your socket connection waiting line. Starting with the RR = QD/RT relationship, multiply both sides by RT, yielding QD = RR * RT. The average connection backlog will be the connection arrival rate (expressed in connections/second) multiplied by the average round-trip time (in seconds) for a three-way handshake. A site bombarded by 100 connection requests/second from local machines, where the round-trip service time sits near 30 msec, will only have a backlog of (0.03 second * 100) = 3 connection requests. Accept that same load from the Internet, where the handshake round-trip time is more like 300 msec on a good day, and the queue depth increases to 30.
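
If you'd rather let the machine do the arithmetic, here is a trivial sketch of both forms of the relationship; the rate and round-trip values are assumptions you would replace with your own measurements:

#include <stdio.h>

int
main(void)
{
	double rt = 0.300;	/* average three-way handshake round trip, in seconds (assumed) */
	double rr = 100.0;	/* incoming connection requests per second (assumed) */
	int    qd = 32;		/* configured listen() queue depth */

	printf("expected backlog:  %.1f connections\n", rr * rt);	/* QD = RR * RT */
	printf("sustainable rate:  %.1f connections/sec\n", qd / rt);	/* RR = QD / RT */
	return (0);
}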

There are two steps required to raise the connection backlog limit. First, change your server-side code so that listen() is passed a more accurate depth parameter. Second, inform the kernel of the larger backlog high-water mark. In Solaris 2.4, do this using ndd:

luey# ndd -set /dev/tcp tcp_conn_req_max 32

The default value is still only 5. Place this command in /etc/init.d/S69inet, or have a boot script execute it before httpd is started, or you'll be clamped at the too-small default. You can increase the backlog up to 32 in Solaris 2.4, and Solaris 2.5 further raises the upper bound to 1,024 connections. (Thanks to Bob Gilligan of Sun's Internet engineering team for the math and the explanation of the connection request mechanics.)

Be safe and wrap it
Once the configuration and performance issues are under control, you should enjoy a flurry of free-flowing network connections. On your organization's internal networks, this may be enough to make you a hero. If you're connected to the outside world, or if you don't trust all of the players on your own networks, this is enough to cause the security-conscious to scream.

Note that the same rules apply internally and over the Internet. Keeping the bad guys out of your internal network is fundamentally the same problem as confining the frisky marketing types to their own printer when they're in color-transparency mating season. The net-net bottom line is that it's easy for service connections to be made by an unauthorized or unwanted user.

Before we get into the tools, a few words of warning are in order. The issues highlighted in this month's column are covered in gory detail in books dedicated to building and designing firewall systems. Firewalls and Internet Security by Cheswick and Bellovin and Building Internet Firewalls by D. Brent Chapman and Elizabeth Zwicky plumb the depths of theory and practice quite well. We aren't going to treat the problems thoroughly, or delve into policy and operational matters. Take the time to develop a well-engineered solution, calling on expert help to supplement your in-house skills, before putting your business or reputation on the wire. For starters, check out the firewalls mailing list maintained by Brent Chapman.

The logical way to make your house more secure is to add a lock and peephole to the front door. Apply the same logic to your network servers by adding a front-end that inspects incoming connections and locks out those that haven't been authorized. Do you recognize the IP address on the other end? Is someone trying to open every port on your machine, looking for holes? Can you log all access attempts to help identify possible attacks?

One of the more popular packages to inspect and log incoming connections is Wietse Venema's TCP wrapper. Venema is the co-author, with Dan Farmer, of the SATAN assessment tool. TCP wrapper works in conjunction with inetd. Normally, inetd listens for requests on well-known service ports, spawning a new daemon for each completed connection. The /etc/inetd.conf configuration file indicates the daemon to be run for each service:

ftp	stream	tcp	nowait	root	/usr/sbin/in.ftpd	in.ftpd

The major problem with inetd is that it has the morals of an alley cat, and will happily accept connections from anyone who reaches it. Enter the TCP wrapper daemon, tcpd, which sits between inetd and the appropriate daemon. Each service entry in /etc/inetd.conf names tcpd as the daemon to invoke, with the real daemon's name passed as an argument to the wrapper:

ftp	stream	tcp	nowait	root	/usr/local/bin/tcpd in.ftpd

When tcpd is executed, it consults a pair of permission files, /etc/hosts.allow and /etc/hosts.deny. Authorization information for remote hosts is itemized in these files, including directives to explicitly accept some hosts or networks, deny other host or network addresses, and log various activities. If the remote host is an electronically undesirable network partner, the connection request is dropped.
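
As a hedged illustration of the format -- the domain and network number below are placeholders, not recommendations -- a restrictive pair of files might read:

# /etc/hosts.allow
in.ftpd, in.telnetd:	LOCAL, .eng.example.com
ALL:			192.9.200.

# /etc/hosts.deny
ALL: ALL

The first file lets local hosts and anything in the example domain reach ftp and telnet, and lets the 192.9.200 network reach any wrapped service; the second file turns everyone else away.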

A similar package called netacl (Network Access Control List) is part of the Trusted Information Systems firewall toolkit (FWTK). A log-monitoring tool well-versed in TCP wrapper output is available at Auburn University. Venema also authored a wrappered version of portmap, the daemon that hands out the TCP/IP port numbers corresponding to RPC program numbers. Armed with these basic building blocks, you're ready to implement authorization and access-control policies.

Policies of firmness
Current literature on firewall and network security roughly divides policies into two classes: those that specifically deny some services, allowing anything else by default, and those that specifically allow services and deny connections by default. While the latter policy camp is much more restrictive, it also tends to limit the number of headaches you will have to deal with. Being firm and denying services by default means you're in for fewer surprises from unexpected holes in previously unused services.

Consider the task of protecting a home-grown application that you want to make accessible across firewall or company boundaries. The TCP wrapper package protects services owned by inetd, and the modified portmapper covers RPC-based applications such as NIS and various license managers. However, services managed by daemons started at boot time, outside of portmap or inetd control, are not protected by either wrapper. You'll need to enforce access controls at your router, using a low-level packet filter, or modify the application's installation so it can be managed by inetd, as sketched below. Avoid retooling applications to perform their own network authorization -- you're likely to end up with inconsistent or incomplete implementations, leaving you open to a host of attacks on your host.
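
For example, a home-grown service could be handed to inetd and wrapped like any other daemon. The service name, port number, and daemon path here are hypothetical: first register the port in /etc/services:

myapp	7001/tcp

then name the wrapper in /etc/inetd.conf, with the real daemon as its argument:

myapp	stream	tcp	nowait	nobody	/usr/local/bin/tcpd	in.myappd

The daemon itself must be written to expect its connection on standard input, the way any inetd-managed server does.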

If you are going to use wrapper services to restrict service access inside your organization, the problem of network security extends beyond the protected server. Let's say you decide to configure a TCP wrapper that allows any connection from a machine on the "inside" of your network, assuming all employees are well-intentioned and to be trusted. What you can't trust, however, are the packets coming through your router or Internet gateway. If an attacker hand-crafts a connection request packet with a source IP address that appears to be inside your network, it's possible that the TCP wrapper will happily accept the connection. This problem, known as IP spoofing, must be dealt with at the boundary between the internal and external (Internet) networks. Your router, gateway, or firewall should discard packets that arrive on the external network connection but claim, via a forged IP address, to be from the inside. More information on IP spoofing and how it was used by Internet rogues like Kevin Mitnick can be found on the Information Works! publications list.

Keep your SOCKS on
So far, we've concentrated on keeping the unwanted characters out through careful inspection of their source addresses. We've assumed a fair bit of transparency through your connection to the Internet, with host-level security taking the spotlight. What if your gateway or Internet firewall doesn't forward IP packets? Most hosts that straddle "inside" and "outside" networks do not automatically route IP packets, whether outside is the Internet proper or simply an untrusted stretch of data highway. In a purely perimeter-oriented defense, turning off IP forwarding helps keep the bad guys out. It also keeps the good guys in, unless you create proxy, or relay, applications on the gateway that connect through your locked door to the outside world.

Of course, there's another publicly available package to solve this problem: SOCKS, a name that is derived more as a contraction of "sockets" than as an acronym. Learn more about the package's history and availability on the SOCKS Web page. Using SOCKS, connections from the inside are relayed to the outside network, with only minor modifications to the application to make it conscious of the relay. Changing application code is a small price to pay for user-level transparency; users won't have to contend with explicitly talking to a proxy instead of a familiar command line.

SOCKS consists of two components: a daemon that runs on your gateway host and a library used to build applications that talk to that daemon. The SOCKS daemon listens for connections emanating from inside the firewall and relays them to the outside. The library is consulted in place of socket set-up calls such as bind() and connect(), causing them to talk to the daemon instead of the actual exterior network service. These routines have an R prefix, so the SOCKS version of bind() is called Rbind() and the modified connect() is Rconnect(). In addition to these two calls, accept(), listen(), getsockname(), and select() are overridden.

Rebuilding an application to understand the SOCKS relay is known as "SOCKSifying" the client. The simplest approach, which involves no source-code changes, is to modify the Makefile to redefine the necessary socket calls as macros, substituting the corresponding SOCKS routine as each macro's value:

-Dconnect=Rconnect -Dbind=Rbind

Wherever connect(args) appears in the application code, it will be replaced with Rconnect(args). If the Makefile trick results in bizarre compilation and linker errors, you'll have to manually modify the client to use the SOCKS library routines. Of course, if your code is completely dynamically linked (a topic we'll visit in coming months), you can build SOCKS as a shared library and have the dynamic linker do the dirty work.
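
To picture what the substitution does, here is a hedged sketch of a client fragment. Compiled with the -D flags shown above and linked against the SOCKS client library, the ordinary-looking connect() call is relayed through the daemon on the gateway:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

/* With -Dconnect=Rconnect on the compile line, the call below is
 * compiled as Rconnect(), and the session is set up through the
 * SOCKS daemon rather than directly to the outside host. */
int
open_remote(struct in_addr host, unsigned short port)
{
	struct sockaddr_in sin;
	int sd = socket(AF_INET, SOCK_STREAM, 0);

	if (sd < 0)
		return (-1);
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr = host;
	sin.sin_port = htons(port);

	/* the preprocessor rewrites this as Rconnect(sd, ...) */
	if (connect(sd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		close(sd);
		return (-1);
	}
	return (sd);
}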

SOCKS isn't a panacea, since most environments will have many non-Unix clients and servers. There are SOCKS libraries available for Macintosh and Windows clients, although the SOCKS daemon must run on a Unix host. A client host running SOCKS-compatible applications needs an /etc/socks.conf configuration file that points to the available SOCKS relay hosts. This bit of client-side work makes SOCKS non-trivial to install on thousands of hosts. Another downside to the tool is that it only works for TCP-based services; you'll need the companion package udprelay to handle connectionless services. More information is contained in both books on firewalls mentioned above, as well as in the installation and configuration notes that come with SOCKS.

Behind the green door
The tools discussed so far impose access controls on your network. These restrictions are similar to file permissions, but with a twist: once someone has opened a connection, any other interested party on the network can snoop on the data transfer. Access controls only handle the connection part of the problem -- you need to worry about the entire lifetime of the session, including any possible data exposure. Consider this example: one of your users ftps through the firewall to a public-access machine on which he has an account. He drops off a file with some work in progress, hoping to work on it later that night from home. Since his home directory on that machine has no read or write permission for anyone but himself, and the machine is well administered, he feels comfortable leaving proprietary information there for short periods of time.

Unfortunately, the data was exposed to any watching eyes on at least one remote network while the ftp transfer was in progress. Next month, we'll cover some additional security tools for creating secure login sessions and encrypting files for safer transport over unknown and untrusted channels. Between now and then, ponder the policy areas you'll need to address while promising security and privacy behind your own locked door.

A simple but effective set of policies, coupled with the right tools and a good implementation, will let you sleep better at night, without having to personally answer every knock on your network doors.



(c) Copyright Web Publishing Inc., an IDG Communications company


URL: http://www.sunworld.com/swol-12-1995/swol-12-sysadmin.html