An Introduction to Using Linux as a Multipurpose Firewall by Jeff Regan is exactly what its title suggests. Whether you want a firewall at home or at the office to give you the security you need to stop worrying about crackers entering your system, this article tells you just how to set it up. From configuration to locking it down, all the details are here.
Network Monitoring with Linux by Tristan Greaves is another introduction, this time to a freeware software package called NOCOL, designed to keep your system stable without endangering its security. NOCOL does not need to run as root. Complete instructions are given for installation and configuration, as well as final tweaking to get things running smoothly. NOCOL will analyse your system and keep you informed on how it is running.
LUIGUI—Linux/UNIX Independent Group for Usability Information by Randy Jay Yarger has a long and descriptive title. LUIGUI is a new Linux group that has been organized to look at user interfaces and help formulate a standard in an effort to ease the way for Linux to move onto the desktop. Find out all about it and how you can help.
UNIX Shells by Example is a book review by Ben Crowder. Ben describes this book as a “must-have” for those wishing to learn shell programming. Learn why by reading his review.
Welcome again to another sporadic episode of Stupid Programming Tricks! Fate conspires against our would-be monthly column, but we get fired up to do it again. Last month we recklessly forked and killed processes to play midi files, before burning CPU cycles like venture capital by playing MODs and S3Ms with the MikMod library. If we're quite clever, we can figure out how to put the simple playmidi or MikMod calls into, for example, the scrolltext demo from last December. Well, it would look cool. Still, since we've touched on audio already, let's finish up with it before we get into something else exciting.
Digital audio in Linux comes to us by way of /dev/dsp, which shows up as a file but is actually an interface to your sound card. The kernel interface makes dealing with /dev/dsp fairly easy, if a tad latent. You just open it as you would a normal file, set some parameters, and make ioctl calls, a bit like filling address and data registers before calling a library function in assembly code, not that anyone would do that anymore... (haha, what was it, a whole year ago you last used asm?) So, we get all the thrills of appearing to do something exceedingly clever, while we're actually just following procedure. The audio half of Linux multimedia does exist, and it is easy to use; it's just been a tad ignored on account of visual preoccupations.
If you wish to make sound truly from scratch, you must first invent the universe and compile your kernel for sound support (or insmod the right module with correct IRQ and DMA values). Hopefully, you already have a universe and sound support (find out with cat /dev/sndstat); otherwise, prepare to be frustrated. Compiling your kernel for sound support is a royal pain, so check the Sound-HOWTO and perhaps also the Kernel-HOWTO. For now, let's assume (read: really, really hope) you've already got sound working.
The first thing to do, when you want to use digital audio, is to open your audio device, which is accomplished by using open on /dev/dsp. We'll start simply with playing sound, rather than recording and playing back, so we just need to set the sample size with SOUND_PCM_WRITE_BITS (8 or 16), the channel count with SOUND_PCM_WRITE_CHANNELS (mono or stereo) and the sampling rate with SOUND_PCM_WRITE_RATE (typically 8000Hz, 22050Hz or 44100Hz). For clear sound quality, having 16 bits is most important (exponential quality improvement for linear CPU cost), followed by sampling rate (linear quality improvement for linear CPU cost), followed by stereo (enables cool effects at double the CPU cost). Obviously, this is a gross generalization and everyone knows we have to balance the elements, so I recommend 16 bits at 22KHz mono for optimizing performance for CPU cost. However, unless you have a computer from the Neolithic, you can afford full quality stereo.
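A minimal sketch of that open-and-configure sequence might look like the following. The helper names (open_dsp, bytes_per_second) are my own; the ioctl names come from the OSS interface in sys/soundcard.h. Note that each ioctl may silently adjust the requested value to whatever the card actually supports.

```c
/* Sketch: open /dev/dsp and set format, channels and rate via OSS ioctls.
 * open_dsp() and bytes_per_second() are illustrative helper names. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* How many bytes one second of audio occupies at these settings. */
int bytes_per_second(int bits, int channels, int rate)
{
    return (bits / 8) * channels * rate;
}

/* Returns an fd ready for write(), or -1 on failure. */
int open_dsp(int bits, int channels, int rate)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return -1;
    /* the driver may modify these values in place to the nearest match */
    if (ioctl(fd, SOUND_PCM_WRITE_BITS, &bits) < 0 ||
        ioctl(fd, SOUND_PCM_WRITE_CHANNELS, &channels) < 0 ||
        ioctl(fd, SOUND_PCM_WRITE_RATE, &rate) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

At the recommended 16 bits, mono, 22050Hz, bytes_per_second() works out to 44100 bytes for every second of sound you want to push through the device.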
The way audio works is rather simple—all sounds are just collections of different frequencies. You can break down essentially any periodic function into a series of sine functions, the technique known as Fourier analysis. Conversely, you can create anything out of a series of sine functions. At a very simple level, if you want to hear a pure 220Hz tone, just play a sine function that repeats 220 times per second. To play an octave higher, just play a sine function that repeats 440 times per second. To play an octave chord, add the two functions together. (If you have a graphing calculator, you can add sines of different periods together and see the results.) This is additive synthesis, a simple, resource-intensive idea that is also the most powerful and flexible synthesis technique. I thought I'd share that with you, since we'll use simple additive synthesis in our demo to generate a wave table to play via /dev/dsp.
How does this work? Your speaker vibrates according to the signals it receives from your sound card, and as anyone who lives with dying appliances knows, vibrations make noise. If the speaker moves forward and back in a perfect sine pattern many times each second, you'll hear a pure tone at the frequency corresponding to the speed of the impulses (440 times each second would be A 440, the most common frequency of tuning forks). So, the values in digital audio are just amplitude data for the speaker, and these values are ultimately just composites of many, many sines. When dumped to the speaker, these generate complex tones, producing familiar sounds like human voices, snare drums and brass ensembles. All digital audio, including CDs and mpegs, works this way.
In our example code, we'll generate an additive wave table of a chord using sine tones. The function for equal-tempered, 12-tone intervals is simply freq*(2^(1/12))^n, where n is how many intervals up you want to go from freq, your starting frequency. For an A major chord (meaning the 1st, 5th, 8th and 13th tones of the 12-tone scale, the 13th tone being an octave on top) starting at 220Hz, these are our values, with the middle two rounded to the nearest hertz: 220 277 330 440.
We'll generate the wave table by adding the sines together. (Remember, our wave table contains 44100 16-bit values, or 88200 bytes, which is exactly 1 second of audio data at 44.1KHz 16-bit mono.) Then, we'll open the digital audio, loop for a few seconds while playing our chord, then close up shop and go home. By replacing our wave table with an audio file, we could add a sound effect to a game, such as the “intoxicating” sound that occurs after blasting one of the turrets in Fleuch. The code for our project is in Listing 1.
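The play loop itself amounts to write()-ing the one-second wave table to the device several times over. A hedged sketch, with a made-up helper name; it takes any file descriptor, so the same function works against /dev/dsp or an ordinary file:

```c
/* Sketch: play a one-second wave table `seconds` times by writing it
 * repeatedly to an already-opened fd. play_seconds() is an
 * illustrative name, not from Listing 1. */
#include <unistd.h>

/* Returns total bytes written, or -1 on a write error. */
long play_seconds(int fd, const short *buf, int nsamples, int seconds)
{
    long total = 0;
    int s;
    for (s = 0; s < seconds; s++) {
        ssize_t n = write(fd, buf, nsamples * sizeof(short));
        if (n < 0)
            return -1;
        total += n;
    }
    return total;
}
```

Because /dev/dsp blocks until the card has room in its buffer, this loop paces itself: each one-second write takes roughly one second of wall-clock time.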
gcc -Wall -O2 sound.c -lm -o sound
meaning gcc, all warnings on, optimization level two, from the source sound.c, linked against the math library, producing an executable named sound.