T/TCP: TCP for Transactions
To investigate the benefits of this implementation of T/TCP, it is important to test its operation and to compare it with the original TCP/IP operation. I performed these tests using a Linux 2.0.32 kernel with T/TCP modifications and FreeBSD 2.2.5, which already implements T/TCP.
To compare the performance of T/TCP with that of the original TCP/IP, I compiled a number of executables that returned different-sized data to the client. The two hosts involved were elendil.ul.ie, running Linux, and devilwood.ece.ul.ie, running FreeBSD 2.2.5. The tests covered ten different response sizes in order to vary the number of segments required to return the full response. Each request was sent 50 times, and the results were averaged. The maximum segment size in each case was 1460 bytes.
The metric used for performance evaluation was the average number of segments per transaction. I used tcpdump to examine the packets exchanged. Note that tcpdump is not entirely accurate: during fast packet exchanges, it tends to drop some packets to keep up, which accounts for some discrepancies in the results.
Figure 3 shows the results of testing the number of segments for T/TCP versus normal TCP/IP. It is immediately obvious that there is an average saving of five segments. These five segments are accounted for by the three-way handshake and by the segments sent to close a connection. Lost packets and retransmissions cause discrepancies in the path of the graph.
One interesting point is that even with a TCP client and a T/TCP server, there is still a saving of one segment. A normal TCP transaction requires nine segments, but because the server was using T/TCP, the FIN segment was piggybacked on the final data segment, reducing the count by one. Thus there is a reduction in segments even when only one side is T/TCP-aware.
Figure 4 shows the percentage savings for the different response sizes. The number of segments saved remains fairly constant, but as the total number of segments exchanged grows, the overall percentage saving falls. This indicates that T/TCP is most beneficial for small data exchanges.
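This trend can be sketched with a simple back-of-the-envelope model (the figures below are illustrative assumptions based on the numbers quoted in this article, not the measured data): if T/TCP saves a roughly constant five segments per transaction, the percentage saving must shrink as larger responses require more data segments.

```python
# Illustrative model: constant per-transaction saving of 5 segments,
# shrinking as a percentage when the response needs more data segments.

MSS = 1460          # maximum segment size used in the tests
TCP_OVERHEAD = 9    # segments quoted above for a minimal TCP transaction

def data_segments(response_bytes):
    """Number of MSS-sized segments needed to carry the response."""
    return max(1, -(-response_bytes // MSS))  # ceiling division

def percent_saving(response_bytes, saved=5):
    """Percentage of segments saved, assuming a constant saving of 5."""
    total_tcp = TCP_OVERHEAD + (data_segments(response_bytes) - 1)
    return 100.0 * saved / total_tcp

for size in (1024, 16384, 65536):
    print(f"{size:6d} bytes: {percent_saving(size):4.1f}% fewer segments")
```

A one-segment response saves over half the segments under this model, while a 65KB response saves under ten percent, matching the shape of the measured curve.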
The main memory drain in the implementation is in the routing table. In Linux, for every computer with which the host comes into contact, an entry for the foreign host is made in the routing table. This applies both to a direct connection and to a multi-hop route. This routing table is accessed through the rtable structure. The implementation of T/TCP adds two new fields to this structure, CCrecv and CCsent.
The entire structure is 56 bytes, which is not a major memory drain on a small stand-alone host. On a busy server, though, where the host may communicate with thousands of other hosts an hour, it can be a major strain on memory. Linux has a mechanism by which a route no longer in use can be removed from memory: a check runs periodically to clean out unused routes and those that have been idle for a time.
The problem here is that the routing table holds the TAO cache, so any time a route containing the last CC value from a foreign host is deleted, the local host must re-initiate the three-way handshake with a CCnew segment.
The benefits of leaving the routing entries up permanently are clear. The most likely use would be in a situation where a host talks to only a certain set of foreign hosts and denies access to unknown hosts. In this case, it is advantageous to keep a permanent record in memory so the three-way handshake can be bypassed more often.
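The interaction between the idle-route cleaner and the TAO cache can be sketched in miniature (all names and the idle limit here are hypothetical, chosen for illustration; the kernel's actual structures differ):

```python
import time

# Sketch of a TAO cache keyed by foreign host, storing the last CC value
# received.  When the periodic cleaner evicts an idle entry, the next
# transaction to that host must fall back to a full three-way handshake
# with a CCnew segment (signalled here by lookup() returning None).

IDLE_LIMIT = 600.0  # seconds an unused entry may live (arbitrary choice)

class TaoCache:
    def __init__(self):
        self._entries = {}  # host -> (cc_recv, last-used timestamp)

    def update(self, host, cc_recv, now=None):
        self._entries[host] = (cc_recv, now if now is not None else time.time())

    def lookup(self, host, now=None):
        """Return the cached CC value, or None (forcing a CCnew handshake)."""
        entry = self._entries.get(host)
        if entry is None:
            return None
        cc, _ = entry
        self._entries[host] = (cc, now if now is not None else time.time())
        return cc

    def clean(self, now=None):
        """Periodic sweep: drop entries idle for longer than IDLE_LIMIT."""
        now = now if now is not None else time.time()
        self._entries = {h: (cc, t) for h, (cc, t) in self._entries.items()
                         if now - t <= IDLE_LIMIT}
```

Pinning an entry permanently amounts to exempting it from `clean()`, which is the trade-off described above: memory held forever in exchange for never repeating the handshake.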
The original protocol specification (RFC1644) labeled T/TCP as an experimental protocol. Since the RFC was published, there hasn't been an update to the protocol to fix some of the problems. From the previous sections, the benefits over the original TCP protocol are obvious, but is it a case of the disadvantages outweighing the advantages?
One of the more serious problems with T/TCP is that it opens the host to certain denial-of-service attacks. SYN flooding (see www.sun.ch/SunService/technology/bulletin/bulletin963.html for more information) is a denial-of-service attack in which the attacker continually sends SYN packets to a host. The host creates a sock structure for each SYN, reducing the number of sock structures available to legitimate users; when enough memory has been consumed, the host may eventually crash. SYN cookies were implemented in the Linux kernel to combat this attack, but they cause problems with T/TCP: no TCP options are sent in the cookie, and any data arriving in the initial SYN can't be used immediately. The CC option in T/TCP does provide some protection on its own, but it is not secure enough.
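The conflict becomes clearer with a simplified sketch of the SYN-cookie idea (the secret, hash, and layout here are invented for illustration; real kernels also encode an MSS index and a timestamp in the cookie). The server derives its initial sequence number from the connection 4-tuple, so it keeps no per-connection state until a valid ACK returns. Because the cookie is just a 32-bit sequence number, there is no room to remember TCP options such as CC:

```python
import hashlib

# Simplified SYN-cookie sketch: the server's ISN is a keyed hash of the
# connection 4-tuple and the client's ISN.  A returning ACK is validated
# by recomputing the cookie -- no sock structure needed in between, but
# also no place to store options like T/TCP's CC.

SECRET = b"example-secret"  # hypothetical server secret

def make_cookie(src, sport, dst, dport, client_isn):
    msg = f"{src}:{sport}:{dst}:{dport}:{client_isn}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    return int.from_bytes(digest[:4], "big")  # fits a 32-bit ISN

def check_ack(src, sport, dst, dport, client_isn, ack):
    """A valid ACK acknowledges our cookie ISN + 1."""
    expected = (make_cookie(src, sport, dst, dport, client_isn) + 1) % 2**32
    return ack == expected
```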
Another serious problem discovered during research is that attackers can bypass rlogin authentication. An attacker creates a packet with a forged source IP address, one the destination host knows and trusts. When the packet is sent, the CC options allow it to be accepted immediately and its data passed on. The destination host then sends a SYNACK to the real owner of the forged address, which replies with a reset, since it is not in a SYN-SENT state. The reset arrives too late, however: the command has already been executed on the destination host. Any protocol that uses an IP address for authentication is open to this sort of attack. (See geek-girl.com/bugtraq/1998_2/0020.html.)
It should be noted, however, that the use of T/TCP in conjunction with protocols such as HTTP poses fewer security problems, because HTTP cannot be used to run arbitrary server commands.
RFC1644 also has a duplicate-transaction problem, which can be serious for non-idempotent applications, where repeating a transaction is very undesirable. The error can occur in T/TCP if a request is sent to a server and the server processes the transaction but crashes before it sends back an acknowledgment. The client side times out and retransmits the request; if the server process recovers in time, it repeats the same transaction. This problem arises because the data in a SYN can be passed to the process immediately, whereas in TCP the three-way handshake must complete before data can be used. The use of two-phase commits and transaction logging can eliminate this problem.
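The transaction-logging remedy can be sketched as follows (a minimal illustration; the class and field names are hypothetical, and a real server would persist the log rather than keep it in memory): the server records each transaction's result under a client-chosen ID before replying, so a retransmitted request replays the stored reply instead of re-executing a non-idempotent operation.

```python
# Sketch of duplicate suppression via a transaction log: a retransmitted
# request with a known transaction ID gets the cached reply, so the
# non-idempotent handler runs at most once per transaction.

class TransactionServer:
    def __init__(self, handler):
        self._handler = handler   # the non-idempotent operation
        self._log = {}            # txn_id -> cached reply

    def handle(self, txn_id, request):
        if txn_id in self._log:           # duplicate (client retransmit)
            return self._log[txn_id]
        reply = self._handler(request)
        self._log[txn_id] = reply         # log before acknowledging
        return reply

# Example: a counter that must not be incremented twice per transaction.
counter = {"n": 0}
def bump(_request):
    counter["n"] += 1
    return counter["n"]

server = TransactionServer(bump)
first = server.handle("txn-1", "bump")
retry = server.handle("txn-1", "bump")  # retransmission of the same request
```

Here the retry returns the logged reply and the counter is bumped only once, which is exactly the property the crashed-server scenario above violates without a log.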