Who Is at the Door: The SYN Denial of Service

How to survive the SYN attack on a TCP/IP protocol weakness.

Over the past few months, a denial-of-service attack known as the “SYN Attack” has become notorious. This attack can prevent access to your mail, WWW and other critical servers. The attack was first described in a paper by Robert Morris in 1985 and received little attention. It wasn't until 2600 magazine published source code to exploit this weakness in popular implementations of the TCP/IP protocol stack that the weakness grabbed the attention of Internet Service Providers. One provider, Public Access Networks Corporation of New York City, was attacked repeatedly last September, causing its mail and web servers to be unavailable to its users for extended periods of time. In this article we explain what SYN really is, why it's needed in TCP/IP, why the attack works and how to prevent it.

Introduction

The Internet works as well as it does because its data communication protocols (IP, TCP and UDP) evolved over a decade through major revisions and trial-and-error “adjustments”. As a result, the protocols have developed a legendary robustness that makes them difficult to defeat; however, these protocols were designed with the basic assumption that all network administrators can be trusted. Unfortunately, this is not true in today's Internet environment. Given the right kind of knowledge, virtually any PC can be configured so that a malicious individual, acting as a system or network administrator, can bring down any number of servers on the Internet.

One of these vulnerabilities is called the “SYN” (synchronize) attack, and it can affect anyone who places a server on the Internet. The SYN attack is a denial-of-service attack, blocking others from connecting to your server.

Network Layers

The Internet protocol stack spans several layers of the OSI model. The lowest layer is the physical layer, and it contains the physical wires, network host adapter(s) and adapter device driver(s). The next layer is the data link layer, whose job is to read a stream of bits off the network and assemble them into frames for the next higher layer.

The Internet Protocol (IP), or network, layer is next. It examines incoming frames to determine whether they are IP packets and, if not, passes the frame on to another protocol stack (e.g., Novell) or discards it as nonsense. Related housekeeping traffic, such as the Address Resolution Protocol (ARP), arrives in frames of its own type and is handled at this stage as well. If the frame is an IP packet, its contents are further evaluated by the IP layer for a number of IP-related activities such as the Internet Control Message Protocol (ICMP), which the connectionless ping and traceroute applications employ.

If the packet is not one of the above formats, its content continues to be evaluated as a Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) packet. If the packet contains a TCP header, it is posted to the next higher TCP layer. The verb “posted” is significant: the packet is moved to another place for processing, and that processing will occur sometime in the future. In other words, the IP-TCP boundary is where information, driven by interrupts, “bubbles up” from the environment and waits for processing based upon requests from programs that wish to communicate with the network. The IP-TCP boundary therefore contains a fixed number of memory buffers allocated to network “activity” without the system really knowing what that activity is. It is at this boundary that the SYN attack works.
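The fixed pool of buffers at the IP-TCP boundary can be sketched as a toy model (the class and names below are illustrative, not taken from any real stack): each arriving SYN claims one slot, and a slot is freed only when the connection is completed or times out. If attackers send SYNs that are never completed, the table fills and legitimate connection attempts are simply dropped.

```python
# Toy model of the fixed pending-connection table at the IP-TCP
# boundary. All names and sizes here are illustrative; a real TCP
# implementation is far more elaborate.

class PendingTable:
    def __init__(self, size):
        self.size = size          # fixed number of buffers
        self.pending = set()      # half-open connections awaiting the final ACK

    def on_syn(self, conn_id):
        """An arriving SYN claims one slot; if the table is full, it is dropped."""
        if len(self.pending) >= self.size:
            return False          # a legitimate client sees this as "no answer"
        self.pending.add(conn_id)
        return True

    def on_ack(self, conn_id):
        """The final ACK of the handshake completes the connection, freeing the slot."""
        self.pending.discard(conn_id)


table = PendingTable(size=8)

# An attacker sends SYNs from spoofed addresses that will never ACK...
for i in range(8):
    assert table.on_syn(("spoofed", i))

# ...so a legitimate SYN now finds no free slot and is dropped.
assert table.on_syn(("client", 0)) is False
```

In a real stack the slots are reclaimed by a timer, but because the attacker can refill the table faster than the timeouts expire, the effect on legitimate clients is the same.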

SYN Protocol by Analogy

Before discussing the third Internet layer and how TCP establishes a connection, perhaps it is better to begin with an analogy that illustrates a typical network problem and how TCP overcomes the problem in its daily routine.

Our analogy begins on a college campus with a studious student (SS) who has the misfortune of being placed in a “party” dorm. On a typical evening, SS is studying at his desk trying to master some dry material on data link protocols for his computer networks class. Someone knocks at his door. Upon opening the door, he gets hit with a water balloon from his rowdy neighbors. Using the material from his network class, SS comes up with a solution to stop his pesky neighbors, yet still greet his invited visitors.

He decides on a “secret knock”—his friends announce themselves with a one to five knock code. SS hears the knock and goes to the door; however, he does not open it. Instead, he repeats the original knock and adds his own one to five knock code. Now the visitor knocks the next “sequence” of his code and repeats SS's knocks.

These knocking gymnastics are referred to as a three-way handshake (see Figure 1) in data communications lingo, and they solve three common network problems. First, they allow the two hosts to establish starting “sequence” numbers, which the receiver uses to re-order packets or reassemble datagrams. Second, they enable a host to identify duplicate packets, which arise from the re-transmissions that delayed responses trigger. Finally, if both computers were to initiate a connection with each other at the same time, two orderly connections could still result, without confusion.

Figure 1. The 3-Way Handshake
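The knock exchange maps directly onto TCP's handshake, which can be sketched in a few lines (the functions and packet dictionaries below are an illustrative model, not a real TCP implementation): the SYN carries the visitor's starting sequence number, the SYN-ACK echoes it incremented by one and adds the server's own number, and the final ACK echoes the server's number back.

```python
# Minimal model of the TCP three-way handshake, sequence numbers only.
# The function names and packet format are illustrative, not real TCP.

import random

def syn(isn):                 # the visitor's knock: "here is my starting number"
    return {"flags": "SYN", "seq": isn}

def syn_ack(pkt, isn):        # SS repeats the knock and adds his own code
    return {"flags": "SYN-ACK", "seq": isn, "ack": pkt["seq"] + 1}

def ack(pkt):                 # the visitor echoes SS's code back
    return {"flags": "ACK", "ack": pkt["seq"] + 1}


client_isn = random.randrange(2**32)   # each side picks its own starting number
server_isn = random.randrange(2**32)

p1 = syn(client_isn)               # client -> server
p2 = syn_ack(p1, server_isn)       # server -> client
p3 = ack(p2)                       # client -> server

# After the exchange, each side has confirmed the other's starting number.
assert p2["ack"] == client_isn + 1
assert p3["ack"] == server_isn + 1
```

Note that between the first and third messages the server must remember a half-open connection in one of its fixed buffers; that waiting period is exactly the window the SYN attack exploits.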
