The following was posted to comp.os.linux.admin in late July by Cor Bosman:

It has come to my attention that there is something bad going on with the /etc/gateways file on a lot of Linux machines: machines all over the world are contacting our site in an attempt to set up routed sessions. After some investigating, these all seemed to be Linux machines. I was then told by someone on comp.os.linux.admin that the /etc/gateways file on most Linux distributions (at least as far back as the first SLS distribution) has contained the following information as an "example":
> net microwalt gateway metallica passive
> net hacktic gateway 18.104.22.168 passive
> net default gateway 22.214.171.124 active
126.96.36.199 is one of our Suns. I have never given permission to include this in any Linux distribution. Since our domain also pays for incoming traffic, this is costing me a lot of money for no reason. I would therefore like to ask everyone running routed to check their /etc/gateways and remove both lines mentioning 188.8.131.52. Even if you don't run routed, I'd appreciate it if you removed those entries. I would also ask that they be removed from all future Linux distributions. I'd also be happy to hear who is responsible for this questionable act.
So: please check your /etc/gateways file! There is a good chance it contains these illegitimate entries, since I did not give permission to use those defaults. They may be causing unnecessary traffic for both you and me. —Cor Bosman
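A quick way to audit your own machine is to search /etc/gateways for the sample hostnames quoted in the posting. This is only a sketch: it assumes a standard POSIX shell with grep, and the GW variable (a testing convenience, not part of routed) defaults to the real file.

```shell
#!/bin/sh
# Audit an /etc/gateways file for the distribution's sample entries.
# GW defaults to /etc/gateways; it can be overridden for testing.
GW="${GW:-/etc/gateways}"

# The hostnames "microwalt" and "hacktic" are the ones quoted above;
# matching lines should be deleted and routed restarted.
if [ -f "$GW" ] && grep -qE 'microwalt|hacktic' "$GW"; then
    echo "sample entries present in $GW -- edit the file and restart routed"
else
    echo "no sample entries found in $GW"
fi
```

If the check reports a match, remove the offending lines and restart routed so it stops advertising or soliciting routes through those hosts.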