The Term Protocol
Term, originally developed by Michael O'Reilly (email@example.com), is a program that allows multiple concurrent connections over a single serial line. Term allows almost all “standard” TCP/IP applications to be used on a Unix system that is connected by a serial line to a networked Unix system. Unlike other common serial protocols, such as SLIP and PPP, term requires no administrative (root) maintenance and no modifications to the host kernel. This means that virtually any user with a login shell on a dialup system can use network utilities that were once limited to SLIP/PPP users.
Unlike SLIP or PPP, your machine does not have its own IP address. All incoming traffic must be addressed to your remote host, and it will be directed to your local computer by term.
Term essentially works by redirecting packets on your remote host directly to your local Unix system. This allows any incoming network packets to reach your computer by proxy, via your remote dial-up computer. The same basic idea works for outgoing packets as well: local sockets on your computer are redirected to your remote host, and sent on their way to their actual network destination.
The entire term package is a basic suite of utilities and libraries that allow you to establish these network connections. These utilities are:
term: This is the actual daemon that is run on both the remote and local computers. This establishes the bridge that is needed to link your computer to the remote host and the rest of the network.
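A typical session looks roughly like this (the flags and device path here are assumptions; versions of term differ, so check your term documentation). Log in to the remote system with your comm program and start the remote end with:

term -r

Then suspend the comm program and start the local end, attached to your modem device:

term < /dev/modem > /dev/modem &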
tredir: This is the most commonly used utility that comes with term. It allows the user to manually redirect an outgoing or incoming port for use with non-term applications; for example, redirecting the SMTP (e-mail) port so that the user can send or receive e-mail.
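For example, once the term link is up, a command like the following (the local port number 2025 is an arbitrary assumption) asks term to carry connections made to local port 2025 across the link to port 25, the remote host's SMTP port:

tredir 2025 25

A local mail program pointed at port 2025 would then reach the remote system's mail server.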
tmon: This utility monitors and displays the incoming and outgoing traffic over your serial line. Two bar graphs are displayed showing the levels of traffic, updated each second. This allows you to monitor just how much bandwidth you are using at any time while using term.
trsh: This utility allows you to quickly access your remote login shell, much like rsh or rlogin would allow you to. This allows you to perform routine network tasks from your account if needed.
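For example, to check the remote system without leaving your local shell (assuming the term link is already running):

trsh uptime

Run with no arguments, trsh opens an interactive shell on the remote host, much as rlogin would.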
tupload: Much like sz, this utility is used to transfer files to or from your remote account, depending on which “end” of the term-link it was executed from.
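For example, run from your local machine (the filename is illustrative):

tupload notes.txt

This would copy notes.txt to your remote account; run from the remote end, the same command sends the file toward your local machine instead.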
txconn: When you need to display an X application remotely, or have one displayed on your local screen, txconn establishes the needed redirection to make this possible. (The same effect can be created with tredir, as will be explained later.)
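As a sketch of how this is used (the display number :9 is an assumption; txconn reports the actual value when it starts): run txconn on the remote host, note the display it announces, then start remote X clients with DISPLAY set to that value:

txconn
DISPLAY=:9 xterm &

Their windows are then carried over the term link to your local X server.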
Other applications: Recently, a flurry of activity has resulted in a few more term clients, such as tudpredir, a UDP port redirector; tdate, which sets your computer's time via the Network Time Protocol; and tdownload, which does the reverse of tupload.
Before you can actually run term, you should run a utility called linecheck on the remote and local computers.
Linecheck is used to check the “transparency” of the link, by seeing which 8-bit characters are transmitted across the link. The results of linecheck are used to configure term to operate correctly and optimally.
To run linecheck:
Using a communications program, log into your account on the remote system and run:
linecheck linecheck.log
Suspend your comm program (^Z under kermit), otherwise it will steal characters from linecheck.
On the local system, run:
linecheck linecheck.log > /dev/modem < /dev/modem
After linecheck has completed its operation, examine the two linecheck.log files. At the bottom of these files will be an indication of which characters you must escape in your .termrc configuration file. The messages in linecheck.log give the characters (if any) that need to be ignored on one end and escaped on the opposite end of the link. For example, if my local results indicated that I should escape 34 and 121, my resulting .termrc files would have something like this in them:
Local .termrc:

escape 34
escape 121

and my remote .termrc:

ignore 34
ignore 121

because I have to ignore escaped characters on the other end.