Point/Counterpoint - /opt vs. /usr/local

Should a sysadmin put additional software in /usr/local or /opt? Bill and Kyle argue over the one true location for third-party software.

This month, Bill and I take on one of the classic holy wars between Linux geeks: /opt vs. /usr/local. If you look at the current Linux Filesystem Hierarchy Standard, you will see that both /opt and /usr/local are represented there. If you read further, you will see that they both seem to have a similar purpose: a place to put software that's not part of the standard system. Even though both directories are designed for similar purposes, they each go about that task in different ways. Well, there's nothing quite like two very similar things with subtle nit-picky differences to spark a debate between geeks.

Bill: So what's wrong with /opt?

Kyle: There was nothing wrong with /opt, back when it was set up. You know, back in the days when tar was your package manager and dinosaurs roamed the earth.

Bill: “Back when it was set up.” Oh, here we go again, another “Bill's older than dirt” comment.

Kyle: I'm not saying you're older than dirt, I'm just saying I've seen your yearbook pictures with dirt. Anyway, back then, it made sense for someone to package everything up in one big directory and dump it under /opt. But some time in the last two decades, most Linux distributions started using more sophisticated package managers.

Bill: Ahem. I rather like using /opt. It's nice having a distinct delineation as to what's installed by the distribution and what's installed by the admins after the fact.

Kyle: Wow, I totally agree with half of that statement.

Bill: Hey, there's a first. And it's in print too. Whoohoo!

Kyle: It is nice having a distinct delineation of what's part of the distribution and what the admin installs—for me, it's just in /usr/local.

Bill: This is the first time I've heard you, of all people, advocate more typing.

Kyle: Your system packages can dump their files in /usr, and any third-party packages can put things in an identical directory structure under /usr/local; however, because these days we aren't using tar, but programs like rpm and dpkg to install packages (and their yum and apt front ends), we have a much more sophisticated way to see what is installed and where it's installed, beyond just the ls command. Even then, using ls, I can see that a particular binary is in /usr/local/bin and, therefore, must be part of a third-party package.
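Kyle's "sophisticated way to see what is installed" boils down to asking the package manager which package owns a file. A minimal sketch of both approaches — the binary names here are illustrative, not from the column:

```shell
# Ask the package manager which package owns a file:
#   dpkg -S /usr/bin/rsync    # Debian/Ubuntu
#   rpm -qf /usr/bin/rsync    # Red Hat/Fedora

# Even with nothing but ls, the path itself is the convention Kyle
# describes: anything under /usr/local was installed by the admin.
bin=/usr/local/bin/myapp    # hypothetical third-party binary
case "$bin" in
  /usr/local/*) echo "third-party" ;;
  /usr/*)       echo "distribution" ;;
esac
```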

Bill: I may be arguing semantics here, but that's what Point/Counterpoint is about. To me, /opt is for “options”: stuff that's installed after the fact. /usr/local implies that it's local to that machine. Your “ls” point also applies to /opt, except the path is shorter, and you can't assume that everyone will be using rpm and dpkg.

Kyle: The path is shorter, eh? Well Bill, thanks for the setup.

Bill: What if you compile things from source and don't want to go through the added steps of making a .deb? The bottom line is that there is no real “standard”. All the admins I've seen tend to have their own spin on this.

Kyle: Once you start using /opt, you can count on your system paths increasing exponentially. With /usr/local, my library paths and my binary paths need to add only one entry (an entry that probably is already there).

Bill: Exponential? Only if you're installing a crazy amount of software, man. I rather like knowing that if I'm going to be building, say, a Java application server, my JDK is always in /opt/jdk (I typically have a symlink that points to the real JDK, like /opt/jdk_sun_1.6.0.17). That way, JAVA_HOME is always /opt/jdk. Any other packages, say a custom-compiled apache, can live in /opt/apache.
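Bill's symlink convention, sketched in a throwaway directory so nothing touches the real /opt (the version string is the one he mentions):

```shell
# The real thing would be (as root):
#   ln -s /opt/jdk_sun_1.6.0.17 /opt/jdk
#   export JAVA_HOME=/opt/jdk

# Demonstrated in a scratch directory:
tmp=$(mktemp -d)
mkdir "$tmp/jdk_sun_1.6.0.17"            # stand-in for the unpacked JDK
ln -s "$tmp/jdk_sun_1.6.0.17" "$tmp/jdk"
JAVA_HOME="$tmp/jdk"     # stable path; upgrading = repointing the link
readlink "$tmp/jdk"
```

The payoff is that scripts and app-server configs hard-code /opt/jdk once, and a JDK upgrade is a single `ln -sfn`.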

Kyle: But if you installed the JDK in /usr/local (not that Sun would ever approve), you could have all the libraries in /usr/local/lib and Java binaries in /usr/local/bin, and you could just use your regular package manager to see where things are installed.

Bill: That's only a couple of paths. And you're assuming that these things are packaged by the software maintainer, or that I want to go through the work of making packages myself. Lots of times, software's not packaged.

Kyle: It's an extra (and in my opinion, proper) step when you are deploying software to your servers, but it's a step that makes sure you can automatically handle dependencies and can use standard tools and not tar, cp and rm to add and remove packages.

Bill: Whoa, you're calling tar and cp “not standard tools”?

Kyle: Standard packaging tools. Let's take your apache example. If I wanted a custom apache, I'm going to have to compile it, right? All I have to do is use the --prefix option to change where it is installed from /usr to /usr/local. In my opinion, that's easier than the /opt approach.
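For the record, the configure dance Kyle describes looks like the sketch below. One detail in his favor: /usr/local is autoconf's default prefix, so for many source trees you can omit the flag entirely. The version string is hypothetical, and this is a generic sketch, not a tested apache recipe:

```shell
# Build a source tree so it installs under /usr/local instead of /usr:
#   tar xzf httpd-2.x.y.tar.gz && cd httpd-2.x.y   # version hypothetical
#   ./configure --prefix=/usr/local
#   make
#   sudo make install
#
# The /opt equivalent would be --prefix=/opt/apache, at the cost of
# another PATH entry and library path per package.
```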

Bill: It's rather nice to be able to take a completely working server and just rsync its directory to another box.

Kyle: Again, I suppose if you are a closet Solaris or Slackware admin, tar, cp and rm are your packaging tools, but if your add-on programs are in packages, you can just use a standard package manager.

Bill: Yes, if there's a package for it, or you want to go through the work of making one.

Kyle: That's what I think this argument ultimately comes down to: the traditional (and ancient) approach of installing software before proper package managers came on the scene versus the modern way of deploying software on a server.

Bill: There are times when packaging is appropriate. If you've got lots of time to prepare, or require a lot of validation and control, then sure, package away.

Kyle: The modern way is to use package managers, so dependencies are handled automatically, adding, removing and updating packages is managed, and there's an audit trail. The traditional way is just to untar or copy files around and hope they work. Plus, with the traditional way, you tie up extra space by sharing fewer libraries and just bundling everything together in each package, even if another package might use the same libraries. The work of doing it “right” is work up front that saves you a lot of work down the road. I think what it comes down to is that Bill has a soft spot for /opt from all his years at Sun.

Bill: Hey, just because I feel nostalgic when I see an /opt/SUNW type path doesn't mean I don't have a point. On a development system, or a sandbox, having an /opt directory where you can just toss things and see if they work makes a whole lot of sense. I know I'm not going to go through the effort of packaging things myself to try them out. If the app doesn't work out, you can simply rm the /opt/mytestapp directory and that application is history. Packaging may make sense when you're running a large deployment (there are times when I do package apps), but lots of times, it's nice to toss stuff in /opt.

Kyle Rankin is a Systems Architect in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and he probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.


Comments


/opt or /usr/local

Jeffrey Moncrieff

I usually put small, client-type applications in /usr/local and server-type applications in /opt. /opt could possibly be an NFS or Samba mount, which makes disaster recovery for that mount easier. Having said that, you'd want to make sure the share was located on a system with redundant backups, i.e., RAID.

What about /srv vs. /var/lib ?

Anonymous

... and /home vs. /usr/home ? And /usr vs. /usr/local ?

An alternative view point just to mix it up.

Mike.Tiernan

I agree with both Bill and Kyle in their discussion of /opt vs. /usr/local (if such a thing is allowed), but I have yet another take on it, given to me by some of the brains I used to work with who had a keener grasp of multiple computers vs. single standalone machines.

If I am running a system as a standalone machine (even if it is networked to others of its ilk) that shares no common data or code, then /opt is a symlink to /usr/local.

However, if I have a set of systems with locally unique files, those files, as Kyle said, go in /usr/local. The packages/tools/files that I build in one place to make available to all the systems in my "cluster" (not cluster computing, just grouped systems) go into /opt, which is a shared mount point, as Bill discussed.

And, if you are really good, you'll have /opt for different flavors of systems, such as opt.x86_64, opt.i386, opt.sol9, opt.irix, etc. and they automount as needed.

A simplistic example of the difference would be something like this:
Say you have four Linux hosts: one in Buffalo, one in Boston, and two in the central office in Dallas. The weather data from each site is automatically stored in a data file in /usr/local, because the other sites don't need to know about a remote site's data. The program that reads this data is the same binary and is found in /opt on each machine. Each system can generate its local data-trending information and store it in /usr/local, but none of the systems can write to /opt.

Yes, it's a contrived example but I hope it illustrates the point reasonably well.

Does the use of a "formal" packaging system such as RPM or DEB change this? No, it just makes it different.

So, in closing, in my opinion, it's a draw!

(If any sysadmin counts the six extra characters in /usr/local as an impediment to their job, they should be hanging it up right now. I question whether any decent sysadmin can even measure the difference in time between typing /opt and typing /usr/local, it's so small.)

Thanks for all the conversations!
