Best of Technical Support
I've been trying to configure iptables to work properly with
incoming SSH and FTP. For some reason, every time I want to FTP from a
remote site, I have to disable the POLICY for the INPUT chain. Can you
explain how to deal with this issue, configuring FTP and
iptables together without having to disable the policy? I'm running
Red Hat 8.0.
Without your list of rules it is difficult to pinpoint the problem, but clearly one of the rules in the INPUT chain is blocking the traffic. Try adding a LOG rule before each actual rule, then watch /var/log/messages to see which one is causing the packets to stop. For example:
iptables -A INPUT -p TCP -s 0/0 -d 0/0 \
    --dport ftp -j LOG --log-prefix "FTP: "
iptables -A INPUT -p TCP -s 0/0 -d 0/0 \
    --dport ftp -j ACCEPT
You should read up on firewalling and FTP. Basically, FTP is a hard protocol to filter; it's actually two protocols in one, depending on the client. Active FTP is not too hard to filter on the server side; you simply need to allow incoming connections on port 21 (the control connection). For passive FTP, however, the server doesn't open the data connection to the client; the client opens the data connection to the server on some high TCP port (>1024). With iptables, you can make use of connection tracking, which opens only the one port used for that FTP connection:
iptables -A $IF -p tcp --dport ftp -j ACCEPT
iptables -A $IF -p tcp --dport 1024:65535 \
    -m state --state RELATED -j ACCEPT
You also have to load the ip_conntrack_ftp module for the above
to work (modprobe ip_conntrack_ftp).
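Putting those pieces together, here is a minimal sketch of the server-side setup. The interface name eth0 and the DROP default policy are assumptions, and the script only prints the commands (DO=echo) so you can review them before running it as root:

```shell
#!/bin/sh
# Sketch: FTP-friendly INPUT rules using connection tracking.
# Assumes eth0 is the outside interface and the policy is DROP.
# DO=echo prints the commands; set DO="" and run as root to apply.
DO=echo

$DO modprobe ip_conntrack_ftp        # FTP connection-tracking helper

# Control connection (port 21):
$DO iptables -A INPUT -i eth0 -p tcp --dport ftp -j ACCEPT
# Passive-mode data connections opened by the client, matched by
# the conntrack helper rather than a wide-open port range:
$DO iptables -A INPUT -i eth0 -p tcp --dport 1024:65535 \
    -m state --state RELATED -j ACCEPT
```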
How can I manually time synchronize my computer? When I
install my distribution, Mandrake 9.0, it lets me choose an
NTP source, but I don't leave my machine powered on all the time.
How can I manually sync to be sure it's happening?
Simply run ntpdate timeserver.
This command synchronizes your time to the
time server and also reports how far off
your clock was. You probably should follow this by
saving the time to your hardware clock, so it is
preserved across reboots: hwclock --systohc.
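If the machine should resync whenever it happens to be running, a root cron entry is one way to automate both commands. This is a sketch; pool.ntp.org stands in for whatever time server you chose at install time:

```
# /etc/crontab fragment: resync once an hour and save the result
# to the hardware clock ("pool.ntp.org" is an example server).
0 * * * *  root  ntpdate -s pool.ntp.org && hwclock --systohc
```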
I had Red Hat 7.1 installed on my PC, with another
partition used for Microsoft Windows. I recently
re-installed Windows using mssetup.
When the system reboots I am not being asked whether
to switch to Windows or Linux. Now
the system starts up directly in Windows. Is there
some way to restore Linux?
Kunal S Doddanavar
Reinstalling Windows overwrote the Linux bootloader,
which is LILO on Red Hat 7.1. Boot with your
rescue floppy, mount your Linux root partition
with, for example, mount /dev/hda1 /mnt,
and run lilo -r /mnt before rebooting. On newer
Red Hat releases that use the GRUB bootloader,
re-installing it with grub-install should do the trick.
If you didn't make a boot disk, boot with the
first install CD in rescue mode.
I am using Red Hat Network to upgrade my software
and keep it current. I have allowed the up2date
program to include my kernel. Now my /boot
partition is getting too full. How do I remove
some of the old kernels? I really don't think I
need five different kernels in /boot.
Simply remove the undesired boot images. You could
run rpm -qa | grep kernel to find which kernel
packages you have installed, and use rpm -e to
remove the older ones. As a suggestion, keep at least two, so that if
something goes wrong with the current one you have
a known-good kernel to fall back on.

This is not only okay, it is a good administration habit. You should
keep only useful kernels around, and generally only two are required: the
primary kernel and a backup in case something happens to the primary.
Saving as many versions as you have is rarely necessary
unless you have special requirements, such as developing and
testing kernel drivers.
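As a sketch of that cleanup, the script below keeps the two newest kernel packages and prints the rpm -e commands for the rest. The package names are invented examples; on a real system the list comes from rpm -qa:

```shell
#!/bin/sh
# Hypothetical kernel package list, oldest first; on Red Hat you
# would generate it with:  rpm -qa | grep ^kernel
INSTALLED="kernel-2.4.18-14 kernel-2.4.18-18.8.0 kernel-2.4.18-24.8.0 kernel-2.4.18-27.8.0 kernel-2.4.20-8"

KEEP=2                                   # always keep the newest two
COUNT=$(echo $INSTALLED | wc -w)
REMOVE=$(echo $INSTALLED | tr ' ' '\n' | head -n $((COUNT - KEEP)))

for pkg in $REMOVE; do
    echo "rpm -e $pkg"                   # drop the echo to really remove
done
```

Never remove the kernel you are currently booted from.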
How do I mount a USB flash drive? I can see my flash drive when
I check /proc/bus/usb/devices. When I run the hardware browser, it shows up
as hda4 (fat32), but I can't mount it or access the files.
It looks like you do not have the usb-storage driver loaded, which is
needed for this device. Take a look at the Linux USB Guide at
www.linux-usb.org for more information on how to
load the proper drivers and mount the device.
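As a sketch of the steps, assuming the drive really is the partition the hardware browser reported (most kernels actually present USB storage as a SCSI disk, sd*, so check dmesg after plugging it in). The commands are printed rather than executed so you can adapt them first:

```shell
#!/bin/sh
# Load the driver and mount the flash drive (FAT32 mounts as vfat).
# The device name and mount point are assumptions for this sketch.
DO=echo            # set DO="" and run as root to execute for real
$DO modprobe usb-storage
$DO mkdir -p /mnt/flash
$DO mount -t vfat /dev/hda4 /mnt/flash
```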
My video card is a built-in Intel 82845G/GL that
fails with Linux (Red Hat 8.0). Linux probes it during
installation but fails to start up in graphical mode; startx exits with a fatal error.

Searching on Google, I found a page describing
how to configure a system with this video card.
Upgrade the packages it lists, then try startx again.
Telnet and SSH connections seem to time out, and I get
disconnected. I use tcsh for my shell, and the pty device I am logged
in on is listed in /etc/securetty. This is not an issue with autologout:
even if I disable autologout, the connection still is dropped after about
an hour. When this happens, the user still is listed as being logged in
and the shell still is active; it has to be terminated by killing its process.
This smells of a firewall-level issue. In common NAT and masquerading
setups, if there is no traffic on a link for some time the router
will forget about the connection, assuming it was closed improperly.
This is because some clients do not issue closure requests correctly,
and it would be unwise to allow these stale connections to continue to
tie up kernel resources.
You may be going through a NAT gateway that expires idle TCP connections after one hour of inactivity. Try (as root):
echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
Then, when you use SSH, you should ask for keepalive TCP packets to keep the connection up:
ssh -o 'KeepAlive=yes' targethost
You can save typing by putting:

ProtocolKeepAlives 300

in ~/.ssh/config to make ssh send keepalive packets for all connections every five minutes.
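The echo into /proc lasts only until the next reboot. To make the shorter keepalive interval permanent, you can add it to /etc/sysctl.conf, which Red Hat applies at boot (a sketch):

```
# /etc/sysctl.conf fragment: send TCP keepalives after 10 idle minutes
net.ipv4.tcp_keepalive_time = 600
```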