The Linux /proc Filesystem as a Programmers' Tool

Manipulating all manner of runtime state information by using file-level system calls and commands.
Conclusion

The process filesystem provides all who make use of it with a wealth of system-level information. The ability to manipulate all manner of runtime state information by using file-level system calls and commands, such as cat(1) and echo(1), makes proc a high-priority candidate for inclusion in anyone's Linux toolkit.

Joshua Birnbaum began his system administration career in 1994. An addiction to SGI led to Sun and then to Linux. From there, he broadened his horizons by branching out into contract sysadmin, public speaking, UNIX/Linux systems programming and now writing for magazines. He can be reached at engineer@noorg.org.


Comments


Only user space /proc file system description

Anton's picture

I found this article because I work with some GPL-published kernel code for a MIPS-based device. What I'm trying to do is interact with the /proc filesystem to hack the hardware. Since the application is not GPLed, only the kernel is, I can only play with the kernel source code. But everything in the article is from a user-space point of view. Anyway, great introduction to /proc. Thanks

sscanf vs. hash tables

Lincoln's picture

I've just started to do more detailed programming with Linux and have also seen proc files processed with GHashTables. This seems like it would work well for anything that lists data in the "Desc: value" format; later you can just do lookups on the hash table. I realize that this would mean using glib.h, which is somewhat tied to GNOME, but let's not debate that part for now, as I'm working on GNOME projects anyway. I'd just like to know if this is considered a valid way of processing proc files. Thanks.

Lincoln
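For what it's worth, the splitting step itself needs nothing beyond the C library. Below is a minimal, hypothetical sketch of pulling apart one "Desc: value" line (function name and buffers are illustrative); in a glib program the resulting key/value pair would then be handed to g_hash_table_insert():

```c
#include <stdio.h>
#include <string.h>

/* Parse one "Desc: value" line from a proc file such as /proc/meminfo.
 * Returns 1 on success, 0 on lines that do not match the pattern.
 * Plain C shown here to stay dependency-free; with glib, key and val
 * would go straight into a GHashTable. */
static int parse_desc_value(const char *line, char *key, size_t klen,
                            char *val, size_t vlen)
{
    const char *colon = strchr(line, ':');
    if (colon == NULL)
        return 0;

    size_t n = (size_t)(colon - line);
    if (n >= klen)
        return 0;
    memcpy(key, line, n);
    key[n] = '\0';

    /* Skip the colon and any whitespace before the value. */
    colon++;
    while (*colon == ' ' || *colon == '\t')
        colon++;
    snprintf(val, vlen, "%s", colon);
    /* Trim the trailing newline left by fgets(3). */
    val[strcspn(val, "\n")] = '\0';
    return 1;
}
```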

Damn, parsing proc files is j

Walter Stryder's picture

Damn, parsing proc files is just so much easier with Perl, and can be used to produce really nice output.

Re: Damn, parsing proc files is j

Josh Birnbaum's picture

> Damn, parsing proc files is just so much easier with Perl,

Originally, ifchk was a perl(1) wrapper around netstat(8) and ifconfig(8) (minus about 60% of its current functionality).
I decided to rewrite and extend the program in C for several reasons. One was runtime performance.

Also, if I had written the Linux proc routines in perl(1), that would have required a glue layer to bind the C code to it. I don't want this added overhead, runtime or otherwise.

Also, consider systems that, as a matter of policy, do not/must not have a perl(1) interpreter installed. A firewall is a good example of this.
In my opinion, these systems should be loaded with the bare minimum to allow them to do their job. In this case, filtering packets, doing NAT, etc.
What I'm getting at here is that the addition of programmatic tools, outside of a command interpreter, on such systems is potentially dangerous. Such additions should be justified.

The proc routines that I provide in the article can be learnt by anyone willing to invest the time to do so. Additionally, an integral aim of the piece was also to provide some background on secure file-access procedures.

> and can be used to produce really nice output.

netstat(8), with its sscanf(3) calls, is also capable of formatted output.

Perl C fusion

Anonymous's picture

man perlxstut

Time the result and see how bad it really is; it's actually quite acceptable.

this article...

nikhil bharava's picture

It is a clear, concise and very informative article. I enjoyed reading it a lot. Maybe in time there will be an in-depth article on the internals of certain working utilities in Linux, like top.

nikhil

adapting to /proc/net/dev is easy

dean gaudet's picture

long before i learned about "sar -n DEV 5 0" i wrote a perl script which handles /proc/net/dev ... and adapting to differing numbers of fields was trivial.

http://arctic.org/~dean/scripts/bandwidth

it's also unclear to me why it matters if /proc/net/dev is a symlink... you're opening it read-only.

-dean

Re: adapting to /proc/net/dev is easy

Josh Birnbaum's picture

> http://arctic.org/~dean/scripts/bandwidth

Thanks for the URL. I'll check it out.

> it's also unclear to me why it matters if /proc/net/dev is a symlink... you're opening it read-only.

It has been my experience that /proc/net/dev has always been a zero byte file system object of type "file" (as opposed to a link/directory/fifo, etc).

The file access sequence I discuss in the article (lstat(2) -> fopen(3) -> fstat(2)) revolves around secure file access. What do I mean by that?
Well, consider this scenario. We first call lstat(2) on the file, checking that it's zero bytes in size, chowned root:root, etc. If it passes these attribute tests, the file conforms to our expectations of a legit procfs object. We then call fopen(3) on the file and access its contents. But... there's a problem here. What if the file we lstat(2)ed is _not_ the same file we then call fopen(3) on? That is, what if the file was replaced with a different file between the two calls? What I'm describing here is a possible file-based race condition.

My point is that you *cannot* assume anything when dealing with external input.
In writing these routines, I was doing things like attempting file open operations on jpg images of Ferraris, MS Word docs, PDF files, gzipped tar archives, etc. I wanted to see how ifchk would deal with, what is, in the context of a legitimate /proc/net/dev file, garbage.
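As an illustrative sketch (hypothetical function name, not the actual ifchk code), the lstat(2) -> fopen(3) -> fstat(2) sequence with a device/inode cross-check might look like this:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Securely open a procfs file: lstat(2) the path first, fopen(3) it,
 * then fstat(2) the open descriptor and verify that both calls saw the
 * same object (same device and inode), closing the window in which the
 * path could be swapped between the two calls. */
FILE *open_proc_safely(const char *path)
{
    struct stat before, after;

    if (lstat(path, &before) == -1)
        return NULL;                     /* e.g. /proc not mounted */
    /* A legit procfs object: a regular file, zero bytes, owned by root. */
    if (!S_ISREG(before.st_mode) || before.st_size != 0 ||
        before.st_uid != 0)
        return NULL;

    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;

    if (fstat(fileno(fp), &after) == -1 ||
        after.st_dev != before.st_dev || after.st_ino != before.st_ino) {
        fclose(fp);                      /* not the file we lstat(2)ed */
        return NULL;
    }
    return fp;
}
```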

I hope this answers your question.

I'm curious what flavor of Li

mikebo's picture

I'm curious what flavor of Linux the author targeted. My make failed spectacularly under Fedora Core 3.

Re: I'm curious what flavor of Li

Josh Birnbaum's picture

> I'm curious what flavor of Linux the author targeted.

I did the ifchk Linux port mostly under the 2.4 kernel, from 2.4.22 onwards. I'm now running 2.4.31.
I'm currently trying to get access to a 2.6 system, to test on.

> My make failed spectacularly under Fedora Core 3.

Does the make output look like what's at the URL below?

http://www.noorg.org/ifchk/news/05032005b.html

Drop me a line at engineer@noorg.org and we'll work on this.

the nice /proc file system

Ioan Gartner's picture

- I think /proc is a great idea and somehow in line with UNIX overall philosophy (all is a "file").
- I regret sometimes that the layout and the values of the /proc file system change from one OS version to the other, but maybe evolution requires that, so that's OK with me.
- The real pity is not so much that its layout changes, but the fact that its documentation is often difficult to find and very poor.
- In that respect I think this article is a great idea; that is the reason I am here reading and commenting. Let's get out of the "computer middle ages" of the '70s and '80s and recognize that, after all, an OS is just another application (I agree, a special one in many respects: it has its own special objects to manipulate, its own delicate "timing" conditions, its sensitive HW interfaces, etc.), but why so much "mystery" around it?

Re: the nice /proc file system

Josh Birnbaum's picture

> I think /proc is a great idea and somehow in line with UNIX overall philosophy (all is a "file").

This is one of the things that really stood out for me when I started working with Linux. I can now access in-kernel structures as if they were a text file. cat(1), less(1), fopen(3), fstat(2) on a proc file. No problem.
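As a small illustration (a sketch, not code from the article), reading a kernel counter such as the system uptime takes nothing more than stdio:

```c
#include <stdio.h>

/* Read the system uptime from /proc/uptime, which holds two
 * floating-point fields: seconds since boot and cumulative idle
 * seconds.  Returns the uptime, or -1.0 on failure. */
double read_uptime(void)
{
    double up = -1.0, idle = -1.0;
    FILE *fp = fopen("/proc/uptime", "r");

    if (fp == NULL)
        return -1.0;
    if (fscanf(fp, "%lf %lf", &up, &idle) != 2)
        up = -1.0;
    fclose(fp);
    return up;
}
```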

> I regret sometimes that the layout and the values of the /proc file system change from one OS version to the other, but maybe evolution requires that, so that's OK with me.

I comment on this, from within the context of sscanf(3), below.

> The real pity is not as much that its layout changes, but the fact that its documentation is often difficult to find and very poor.

This is where the Usenet archives shine. You can really piece a lot together by sifting through the archives at groups.google.com. It's a _very_ powerful resource.

> In that respect I think this article is a great idea,..

Thank you. I'm glad you enjoyed it. I certainly had a lot of fun writing it.

> ...but why so much "mystery" around it?

The more exploration one does, the more the mystery lessens. This is what I was getting at near the beginning of the article. Systems programming is like being able to view the internals of a running car engine, from all angles, in real time. That's enticing. And, like Usenet, powerful.

Windows has /proc, too

mangoo's picture

In Windows, there is a Windows Registry instead of /proc (more or less).

For those more familiar with Linux - it is possible to view and edit the registry in a /proc-like mode.
All it takes is installing Cygwin on a Windows machine.

Uh, no

Anonymous's picture

The Windows Registry is NOTHING like /proc. /proc gives you real time access to kernel parameters and the resources of running processes. /proc exists entirely in memory.

The Windows Registry, on the other hand, is a database of configuration parameters. It's NOT real time, it does not contain info on running processes, and it does not give access to Windows kernel runtime parameters.

They are nothing alike.

> Before we begin to talk abo

Matt's picture

> Before we begin to talk about the proc filesystem as a programming
> facility, we need to establish what it actually is.

The proc filesystem should not be used as a programming facility. There are system calls for the same info. The format of the items in /proc can change from one kernel version to the next. Not all systems have /proc mounted. Also, not all flavours of Unix support /proc.

This article is a bad idea.

Re: The Linux /proc Filesystem as a Programmers' Tool

Josh Birnbaum's picture

> The proc filesystem should not be used as a programming facility.

I agree.

> There are system calls for the same info.

Parsing files in /proc is a common way of accessing in-kernel structures (netstat(8) and ifconfig(8) do this). Sifting through the comp.os.linux.development.{system,apps} Usenet group archives illustrates this.

> The format of the items in /proc can change from one kernel
> version to the next.

The programmer has to take care when dealing with conversion specifications in sscanf(3) calls, etc. He/she has to understand the format of the proc object that is to be scanned.
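As a hypothetical sketch of that care, here is one way to scan a 2.4-era /proc/net/dev data line with sscanf(3); the field positions below are assumptions that must be re-checked against each kernel's actual layout:

```c
#include <stdio.h>

/* Pull the interface name plus receive/transmit byte counters out of
 * one /proc/net/dev data line.  The " %15[^:]:" specification stops
 * the name at the colon; "%*u" skips fields we do not need.  In the
 * 2.4-era layout, the receive byte counter is the first field after
 * the colon and the transmit byte counter is the ninth.  Returns 1 on
 * a successful parse, 0 otherwise. */
int parse_dev_line(const char *line, char *ifname,
                   unsigned long *rx_bytes, unsigned long *tx_bytes)
{
    int n = sscanf(line,
                   " %15[^:]: %lu %*u %*u %*u %*u %*u %*u %*u %lu",
                   ifname, rx_bytes, tx_bytes);
    return n == 3;
}
```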

> Not all systems have /proc mounted.

That's OK. This is where function return value checks are so important. The lstat(2) call in the article code (line 565) returns an error if it can't get to /proc/net/dev, and we exit gracefully.

> Also, not at flavours of Unix support /proc.

I wrote this piece from the perspective of the Linux system.

> This article is a bad idea.

Thanks for your feedback.

I just need a tool to mon the process

libbyliugang.cn's picture

I just need a tool to monitor processes.
But the way to do this is different between Windows/UNIX/Linux.

In most UNIX systems, such as AIX, HP-UX and Solaris, there are syscall functions to use. Windows has the same thing. But in Linux, after searching for a long time, the result is that I must parse the /proc files myself, and the format is different between different versions of the Linux kernel....

These system APIs are very useful and very important for a large system.

Maybe we need a standard for the Linux proc filesystem, or a new project to do this.

Not always a bad idea

Michael Jastram's picture

> This article is a bad idea.

You can't generalize that. For instance, most firewall shell scripts use the /proc file system, which is perfectly legitimate. The question is how far you want to push it, and the author is pushing it quite far.

Ultimately, this goes back to the "right tool for the right job" philosophy. And the /proc filesystem definitely has its place in the toolbox.
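As a small, hypothetical illustration of that firewall-script use from C: reading back a /proc/sys run-time variable such as the ip_forward flag that scripts set with "echo 1 > /proc/sys/net/ipv4/ip_forward":

```c
#include <stdio.h>

/* Read an integer /proc/sys run-time variable, e.g.
 * /proc/sys/net/ipv4/ip_forward.  Returns the value, or -1 if the
 * entry cannot be read. */
int read_sysctl_int(const char *path)
{
    int value = -1;
    FILE *fp = fopen(path, "r");

    if (fp == NULL)
        return -1;
    if (fscanf(fp, "%d", &value) != 1)
        value = -1;
    fclose(fp);
    return value;
}
```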

a tool to program /proc/sys

Anonymous's picture

Here's a tool that lets you program and experiment with your /proc/sys run-time variables without having to deal with cat(1)/echo(1) each time. It's ncurses-based and, therefore, very fast.

http://freshmeat.net/projects/lkcp
