Anatomy of a Break-In

by Monta Elkins

His careful attack won't be discovered for three days. Checking the logs from the scanner he left running overnight, he finds a suitable target on the east coast of the US: fast processor, lots of drive space and, most importantly, a fat pipe to the Net. It's running the right version of named, the one with the buffer overflow that hasn't made Bugtraq yet. No patches are even out for it.

He runs the script he pulled off the H4x0rs board: he's in. ("H4x0r" is one of a number of creative misspellings often used in computer cracking discussions.) He edits the logs and gets to work installing the root kit to make sure the sysadmin can't kick him off even if he's found.

Now it's time to make sure the machine is secure. He doesn't want his rivals taking this hacked computer away from him. He disables named and, to make sure it won't come back on reboot, he removes /etc/init.d/named. He remembers that there might be a buffer overflow with lpd too, so he disables it the same way. "I should send them a bill for security consulting", he chuckles while "stealing" the machine.
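
On Red Hat 7.0 his "repair" takes only a couple of commands; a minimal sketch of what he likely ran (the article confirms only that the init scripts were removed):

    # stop the vulnerable daemons and keep them from returning on reboot
    killall named lpd
    rm -f /etc/rc.d/init.d/named /etc/rc.d/init.d/lpd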

He creates a couple of directories recommended in the root kit; /dev/ttyyy is easy for the rightful sysadmin to overlook. His friends recommend /dev/..., so he creates one of those as well.

The root kit includes a loadable kernel module. He's not sure exactly how it works, but the cracking board says it's very new and very 'l33t. The kernel module makes the new files placed on the system difficult to discover; it hides them, and it hides itself. lsmod, which lists the currently loaded kernel modules, shows no sign of it. It also hides files and processes beginning with .kore. He installs the trojaned identd included in the kit. He doesn't know that /usr/sbin/in.identd is a symlink to /usr/sbin/identd.d, and he copies it over as both. He changes the user and group of one to root and, in his haste, forgets to change the other. He's anxious; this is only the second computer he's ever broken into, and he's worried he might be discovered before he gets completely set up.

He installs his IRC server; it goes easily. That will earn him some "cracker currency", goodwill, for providing a service to his peers. He's a little worried about using the trojaned identd to get back into the system, so for good measure he edits /etc/inetd.conf to open a root shell on port 24452. When he tries to restart inetd he can't find it. He figures out that the new Red Hat Linux 7.0 uses the more sophisticated xinetd. He goes off to educate himself on xinetd. When he figures it out, he uses it to open his root shell port. Now, getting root on the machine is as easy as telnet hacked_computer 24452.
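
His service file doesn't survive in the story, but under xinetd a backdoor of this sort is typically one small file in /etc/xinetd.d. A hypothetical reconstruction, where only the port number comes from the article:

    # /etc/xinetd.d/backdoor -- hypothetical reconstruction; only the
    # port number (24452) comes from the article. "UNLISTED" is needed
    # because the made-up service name isn't in /etc/services.
    service backdoor
    {
            type        = UNLISTED
            port        = 24452
            socket_type = stream
            protocol    = tcp
            wait        = no
            user        = root
            server      = /bin/sh
            server_args = -i
    }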

Happy with what he's done, he logs out and starts spreading the word of his newly 0wn3d machine. After looking around on his new (though physically remote) computer for a while, he logs out and starts checking his scanner logs for another host. He'll revisit it several times over the next few days, performing various system administration tasks.

Discovered

Three days later, a system administrator comes in and checks his morning logs. He's been tweaking his logcheck for a couple of years, and it usually spews very little garbage. Today, his automated log reporting shows that someone has been knocking on his computers' doors. "Nobody at that university should be connecting to this machine", he thinks, so he pulls out his standard "Somebody's hacking me, please make it stop" letter. He pastes in the lines from the syslog and starts looking for his counterpart at the university to send it to. Eventually it finds me.

Good Morning, You've Been Cracked

I get into work Thursday morning and start reading my logs. They're all e-mailed to me, which helps make sure I see them; but it makes for a lot of e-mail. I get Tripwire notices and logchecks, Sendmail bounces and various other crond-generated messages from a dozen or two workstations, mostly Solaris on Suns and Linux on Intels. I start a couple of Tripwire updates and question the number of brain cells I actually have for not being able to get certain noise out of my logcheck e-mails, no matter how hard I try. "egrep on Solaris must be broken", I think. I flirt with the idea of actually reading my copy of Mastering Regular Expressions, realize it's at home and instead start reading my other e-mail in reverse chronological order.

First I notice an e-mail from Mike (name changed to protect the innocent), a colleague of mine, saying he had shut down the Ethernet interface on a computer he had recently set up, and maybe that would hold them for a little while. Mike was out for the day but, logging in from home, he had read the common sysadmin e-mail we share and started damage control. That sounded serious.

After reading the rest of the related e-mails, I figured out part of what was going on and went to the cracked computer and sat down at the console. Time to do some of that Linux sysadmin stuff they pay me for.

I didn't really know much about the computer in question. I logged on. who and w showed no one logged on but me. I checked last--nothing. Having dissected several cracked boxes before, I knew better than to really trust anything it told me. System tools are often replaced with versions that lie. I hate it when "my" computer lies to me. But I also know that crackers seldom replace all the tools, and when they disagree it's a clue. So I keep checking and rechecking. I try netstat. It shows a couple of recent connections, including an IRC port. My department doesn't run any IRC servers. Bingo. That tells me something is really amiss with this computer. I know the machine has been "0wn3d" ("owned", that is to say, taken over by crackers). I open a file and start taking notes, verbally expressing my dissatisfaction to myself under my breath and acknowledging it's going to be a long morning. I cut and paste the info from netstat into the file. I've captured half a dozen IP addresses. One may belong to the blackhat jerk that interrupted my morning e-mail reading. (Note to self: try to find a source of IP-address-seeking cruise missiles.)
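
For the record, that whole first pass is only a handful of commands; a sketch of the triage (the comments are mine):

    who                # logged-in users, per utmp
    w                  # same, plus what they're running
    last | head        # recent logins, per wtmp
    netstat -antp      # TCP sockets with owning processes; look for
                       # ports you never opened, such as an IRC server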

Looking for Damage

I want to verify the binaries haven't been replaced with ones that tell less than the whole truth. I should reboot from a known good CD to do my work, but it's an inconvenience, so I decide to take a shortcut--for now. Mike hadn't run Tripwire, so I couldn't use it to check for changes. Probably just as well, if the database wasn't locked down. I know I can use RPM to verify the binaries, if I can figure out which packages are installed. I type updatedb & to update the locate database to help me find stuff. (I just can't function without locate.) locate rpm. I don't see any downloaded RPMs. I keep a fairly recent set on CD and the latest on a local NFS box, but I don't think Mike has used those. I go get my Red Hat 7.0 install disks and mount the first one. updatedb is done, and I check again for RPMs. Finding none, I write a quick bash script to run rpm -V for all installed packages. rpm -V checks each file a package installed on the computer and reports if a file is missing or has been altered. I sit and watch as the results appear slowly on the computer screen. It looks like this is going to take a while.
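
The quick script needs nothing more than a loop over rpm -qa; a minimal sketch (rpm -Va does the same job in one command):

    # verify every installed package; rpm prints a status line for each
    # missing or modified file (the database itself lives on the
    # compromised disk, so treat the results as hints, not gospel)
    for pkg in $(rpm -qa); do
        echo "== $pkg"
        rpm -V "$pkg"
    done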

I reach behind the computer and unplug the Ethernet cable. "Let's see 'em hack that", I mumble while using Alt-F2 to log in on a new virtual console. No telling what "bad things" this computer may be doing at this point and, whatever they are, it doesn't need to be doing them connected to the Net.

I look around some more and get my first real break when I cat /var/log/messages: /dev/ttyyy/.kore/botchk1 >/dev/null 2>&1 pops up. That looks very suspicious: /dev/ttyyy and the hidden name .kore. I start to think perhaps the guy wasn't a real pro (jerk though he was), leaving such an obvious pointer in such an obvious place as /var/log/messages. I cd /dev ; ls -la ttyyy and see the directory was created in the last couple of days. That's sloppy. He should have picked Aug 24, 2000, when most of the /dev files were created. That gives me a date. "Skript k1ddi3", I mumble. "Script kiddie" is a derisive term for crackers of little skill. It implies they only know enough to execute the scripts other crackers have written, without the knowledge or skill to do much on their own.

I cd ttyyy and ls -la, but no .kore directory. That's odd. The log file says it should be there. "Somebody" is lying to me. I try echo .*; I've seen machines where /bin/ls was fixed not to show certain directories, but the cracker forgot about bash wildcard expansion. The echo trick didn't work. Neither did find . or even my beloved locate. I was beginning to consider other explanations for the log message, when I tried cd .kore. That did it--a whole hidden subtree invisible to other commands "magically" appeared.
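
That comparison trick is worth remembering: tools that reach the filesystem by different routes should agree, and when they don't, something in between is lying. The sequence, roughly:

    cd /dev/ttyyy
    ls -la                # external binary: shows no .kore
    echo * .*             # bash expands the wildcards itself, so a
                          # trojaned /bin/ls can't hide anything here;
                          # a kernel-level hook fools this, too
    find . ; locate kore  # still nothing
    cd .kore && ls -la    # yet the tree appears when named explicitly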

"Wow!" That must be a very comprehensive root kit, to change all those commands I tried--unless.... I press Alt-F1 to check on the RPM verification. The binaries check out. ls, find, bash, etc., all pass RPM's checksum. Cool. The cracker must have modified the kernel or maybe installed a loadable module. lsmod shows nothing out of the ordinary, but I know better than to believe anything this r00t3d box tells me at this point.

Loadable Kernel Module

I realize it's time to get a little more serious, so I reboot from my Red Hat install CD in rescue mode. I type mknod /dev/hda2, etc., and start mounting the hard disk partitions. Now things become much clearer. I see that this script kiddie didn't change the dates on the files he created, a very amateur mistake. So I use find to create a list of all files created or modified since the break-in date. Now I see files like /sbin/korebash, /etc/rc.d/init.d/kore.d and /sbin/korelkm.o. There it is, a loadable kernel module. Cool. I can appreciate it, even if I don't like it. I've seen a bunch of cracked machines, but this is the first with a loadable kernel module, and that's what has been hiding the kore files from all the system tools. No use in replacing all the system tools if you can modify the kernel. With this approach, RPM or even Tripwire wouldn't find changes in the binaries for ls, etc., because there were none. Nice. Now that I think back, I distinctly remember ls taking longer than I thought it should in small directories. Perhaps that loadable kernel module wasn't as efficient as it should have been. If the script kiddie had been a little more careful with dates and syslog messages (combined with a more efficient loadable kernel module), I might not have found out about the hacks for hours more, or possibly missed them altogether, writing off the whole attack as some kind of IP spoofing or other magic.
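
Booted from trusted media, the timeline query is a one-liner. A sketch, assuming the victim's root partition is /dev/hda2 and using the cracker's own sloppily dated /dev/ttyyy directory as the time reference:

    # in the rescue shell: recreate the device node, mount read-only
    mknod /dev/hda2 b 3 2             # IDE disk 1 (major 3), partition 2
    mkdir -p /mnt/victim
    mount -o ro /dev/hda2 /mnt/victim
    # list everything created or modified since the break-in
    find /mnt/victim -newer /mnt/victim/dev/ttyyy -print > /tmp/changed.txt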

My find also showed that /etc/xinetd.d was modified on the same day the other files were created. It showed that a root shell had been opened on port 24452. All anyone had to do was Telnet to that port on this computer, and they would have root access. I went back to my desktop workstation and fired up nmap. nmap is a port scanner that makes it easy to check for open ports on a number of machines. I told nmap to check every machine in the subnets I control, looking for open port 24452. A couple of other devices showed up, but no Linux hosts; I decided this was the only machine that was owned. I started to make a mental note to regularly scan all open ports on all the machines I oversee and flag any differences. But I remembered I had already made such a note after dealing with a break-in on some Sun Solaris machines I manage a few months ago. Perhaps later. Yes, it would have been a simple matter for the cracker to open root shells on different ports on different computers, but usually it's the same thing on every computer they break into. (Why should they have to remember what port goes with which IP?) Besides, none of my other Linux boxes were unpatched, and none ran named.
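
The scan itself is trivial; a sketch with placeholder subnets:

    # check two class-C subnets for the backdoor port
    nmap -p 24452 192.168.1.0/24 192.168.2.0/24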

It was fortunate this attack was executed sloppily; even my shoot-from-the-hip, "I'm not in the mood this morning" analysis found it. If I had been really worried about it, I probably would have downloaded CERT's checklist and worked from there, but this machine was rather new, and I didn't think there were any production services or unique data on it yet. Mostly, I didn't want to spend a lot of time dealing with it.

The RPM report finished, and I found two missing files: /etc/rc.d/init.d/named and /etc/rc.d/init.d/lpd. At first glance I thought perhaps Mike had removed them to add some security to this computer by disabling unused services. When I checked with him later, I found he hadn't done anything beyond the standard install. That was the clue I needed to find the cracker's entry point. It seems to be standard practice nowadays for crackers to "repair" whatever hole they came through. Pretty thoughtful, you might think, but it's really just a way to keep others from taking the machine away from them. I checked the portsentry logs from some other computers on the same network and saw named probes from three days ago from one of the IPs I had found with netstat. That's probably my cracker.
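
rpm -V flags deletions explicitly, so the two files stood out; the relevant lines of the report would have looked roughly like this:

    # "missing" marks a file recorded in the RPM database but gone from disk
    $ rpm -Va 2>/dev/null | grep missing
    missing    /etc/rc.d/init.d/named
    missing    /etc/rc.d/init.d/lpd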

I wrote a quick summary of the attack and the modified files I had found (open ports, etc.) and sent it to the guy who coordinates computer system security on campus. I also copied it to the local Linux users group and a few other sysadmin friends who run production Linux machines. That would give them a heads up on what to look for to see if their machines had been compromised, and it would also serve as a reminder to take system security seriously.

What did the attacker really do, in what order, when he cracked this machine? I'll never know for sure; I wasn't watching the machine closely enough, nor did I analyze it deeply enough afterward. But this article represents my best guess. Although this particular machine wasn't that important, I run many Linux machines that are, and analyzing this one helps me keep the others safer. This is only one account, but it should give you an idea of what you might find if you ever have to work on a cracked computer.

I left the computer unplugged and went back to my real work. The next day, I went in to ask Mike if he could leave the cracked computer as it was until after the weekend, so I could take a closer look at it. I found him already re-installing Red Hat Linux from scratch. I walked back to my office and gave him my "lockdown CD" to use after he finished. It contains the latest RPMs and a script that installs them from the CD, tightening security all around. I was a little disappointed that I couldn't look at this cracked machine more thoroughly, but by the time I had finished dealing with it, it had taken at least half a day.
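
The heart of such a lockdown script can be a single freshen run; a minimal sketch, assuming the updated packages sit in an RPMS directory on the CD:

    mount /mnt/cdrom
    # -F (freshen) upgrades only packages already installed, so a full
    # errata set can be applied without dragging in new software
    rpm -Fvh /mnt/cdrom/RPMS/*.rpm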

What should you do to tighten security on your servers? My advice is pretty standard. Keep up with the patches; they are published for a reason. The blackhats read those notices, so you should too. Turn off all services you do not need on your workstation. Either remove them from /etc/rc.d/rc3.d and /etc/rc.d/rc5.d or, more appropriately, rename them so that they begin with a "K" (to kill them at that run level) instead of an "S" (to start them), as sketched below. Quit using Telnet (turn it off) and start using ssh to keep people from sniffing your passwords. Download portsentry and install it. So far I've never had a machine cracked that was running portsentry. Actually, I've been thinking of tying portsentry on various machines together, so that if one takes an illegal hit, the other machines will route the attacker to /dev/null before they are scanned. That would allow a machine that wasn't running NFS, for example, to help protect a machine that had to have NFS running, if the machines were scanned in the "right" order.
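
The renaming can be done by hand or, more safely, left to chkconfig; a sketch for lpd (the number in the link name varies by service):

    # by hand: turn the start links into kill links at run levels 3 and 5
    mv /etc/rc.d/rc3.d/S60lpd /etc/rc.d/rc3.d/K60lpd
    mv /etc/rc.d/rc5.d/S60lpd /etc/rc.d/rc5.d/K60lpd
    # or let chkconfig manage the links for you
    chkconfig --level 35 lpd off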

Use ntpdate or xntpd to set the system time on your workstations. If you are ever cracked, having the clocks synchronized makes it much easier to compare log-file times among the multiple machines you manage, and with other sysadmins nearby who may have the same problem. Become familiar with Bugtraq, Rootshell, CERT and SANS. All are great security resources.
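
For machines that don't need xntpd's continuous discipline, a nightly cron entry is enough; a sketch with a placeholder server name:

    # /etc/crontab: one-shot clock sync at 4 a.m.; -s sends output to syslog
    0 4 * * * root /usr/sbin/ntpdate -s ntp.example.edu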

This machine was broken into because someone didn't have the time to set it up right (don't snicker, we've all done it). But you know what they say: if you can't find the time to do it right, how will you ever find the time to do it over?

Monta Elkins is a systems administrator for Virginia Tech.
