T/TCP: TCP for Transactions
With the World Wide Web now the prime example of client-server transaction processing, this section focuses on the benefits T/TCP brings to the performance of the Web.
Currently, the HTTP protocol sits in the application layer of the TCP/IP reference model. It carries out all its operations over TCP, UDP being too unreliable. This adds considerable latency to every transfer: each request pays for TCP's three-way handshake and for the explicit shutdown exchange.
In a survey of 2.6 million web documents searched by the Inktomi web crawler search engine (see http://inktomi.berkeley.edu/), it was found that the mean document size on the Web was 4.4KB, the median size was 2.0KB and the maximum size encountered was 1.6MB.
Referring to Figure 4, it can be seen that the lower the segment size, the better the performance of T/TCP over normal TCP/IP. With a mean document size of 4.4KB, this results in an average savings of just over 55% in the number of packets. When taking the median size into account, there is a savings of approximately 60%.
At the moment, web pages are transferred in plain, uncompressed form, requiring little work from either the server side or the client side to display the pages.
In their paper “Network Performance Effects of HTTP/1.1, CSS1 and PNG” (see Resources), the authors investigated the effect of introducing compression into the HTTP protocol. They found that compression yielded a 64% reduction in download time and a 68% decrease in the number of packets required. Over normal TCP/IP, this brings the packet exchanges and the size of the data down to the level where T/TCP becomes beneficial. Thus, a strategy combining compression with T/TCP can result in enormous savings in time and bandwidth.
In this context, a delta is the difference between two files. On UNIX systems, the diff command can be used to generate the delta between two files. Given the original file and the delta, the changed file can be regenerated, and vice versa.
J. C. Mogul, et al. (see Resources) investigated the effect that delta encoding has on the Web. In their testing, they not only used delta encoding, they also compressed the delta generated to further reduce the amount of information transferred. They discovered that by using the vdelta delta generator and compression, they could achieve up to 83% savings in the transmission of data.
If this method were combined with T/TCP, there could be a further 66% savings in packets transferred, for a total reduction of roughly 94%.
It should be noted, however, that this is a best-case scenario. In this situation, the document will already have been cached on both the server and the client side, and the client and server will previously have completed the three-way handshake in order to facilitate the TAO tests.
Socket programming for T/TCP differs slightly from standard TCP. As an example, the chain of system calls to implement a TCP client would be as follows:
socket: create a socket.
connect: connect to the remote host.
write: write data to the remote host.
shutdown: close one side of the connection.
Whereas with T/TCP, the chain of commands would be:
socket: create a socket.
sendto: connect, send data and close connection. The sendto function must be able to use a new flag MSG_EOF to indicate to the kernel that it has no more data to send on this connection.
Analysis of T/TCP shows that it benefits small, transaction-oriented transfers more than large-scale information transfers. Transaction-style exchanges appear in the World Wide Web, Remote Procedure Calls and DNS, and these applications can benefit from T/TCP in both efficiency and speed. On average, T/TCP reduces both the number of segments involved in a transaction and the time taken to complete it.
As T/TCP is still an experimental protocol, there are problems that need to be addressed. Security problems include vulnerability to SYN flood attacks and rlogin authentication bypassing. Operational problems include the possibility of duplicate transactions. A rarer problem is the wrapping of CC values on high-speed connections, which can lead a destination host to accept segments on the wrong connection.
Many people recognize the need for a protocol that favors transaction-style processing and are willing to accept T/TCP as the answer. Security considerations lead to the conclusion that T/TCP would be more useful in a controlled environment, one where there is little danger from a would-be attacker who can exploit the weaknesses of the standard. Examples of enclosed environments would be company intranets and networks protected by firewalls. With many companies seeing the Web as the future of doing business, internal and external, a system employing T/TCP and some of the improvements to HTTP, such as compression and delta encoding, would result in a dramatic improvement in speed within a company intranet.
Where programmers are willing to adopt T/TCP, only minor modifications are needed to make an application T/TCP-aware. On the client side, the connect and shutdown calls are eliminated, replaced by a single sendto carrying the MSG_EOF flag. On the server side, the modification is simply adding the MSG_EOF flag to the send call.
Research into T/TCP suggests it is a protocol that is nearly, but not quite, ready to take over transaction processing for general usage. For T/TCP alone, more work needs to be done to develop it further and solve the security and operational problems. Security problems can be solved using other authentication protocols such as Kerberos and the authentication facilities of IPv6. Operational problems can be dealt with by building greater transaction reliability into the applications that will use T/TCP, such as two-phase commits and transaction logs.
Applications can be easily modified to use T/TCP where it is available. Any application built around an open-send-close exchange can use T/TCP efficiently; the more prominent examples are web browsers, web servers and DNS client-server applications. To a smaller extent, applications such as time, finger and whois can benefit as well. Many networking utilities could take advantage of the protocol's efficiency; all that is needed is the incentive to do it.
Perhaps a more immediate task, though, is to port the T/TCP code to the new Linux kernel series, 2.3.x.