Dealing with, um, Wastewater
MARENA, the government agency responsible for the environment in Nicaragua, has asked us to use a biofilter waste water treatment system instead of a traditional septic tank and drain field for the Geek Ranch. The reasoning is that as we are building in a nature reserve, we are being held to higher standards than is typical outside the reserve.
While we don't claim to be wastewater-system experts, we are geeks, so this sounded like a technology challenge. Beyond that, the good news is that a local friend is a retired wastewater engineer (though the job goes by many other titles), so we have the resources to combine his knowledge of the, shall we say, material-handling part of the system with our knowledge of control systems.
On the control-system side is Willy Smith, a fellow Geek Ranch participant with a lot of engineering experience in control systems. As he is also a Linux geek, Linux seems to be the right answer. What you see here is really the design specification for the geek side of the system.
The Task
First, let me define the system's requirements. We need to process the waste streams from a restaurant, a hotel and geek cabinas. We had previously decided to separate black water (toilet waste) from gray water (showers, sinks, wash water and so on). Our original plan was to start with a traditional septic system capable of handling the total load of our initial construction. We would then build a gray-water processing system (probably using plants) and move the gray water over to that system, freeing up septic capacity to support more hotel rooms and geek cabinas.
With the new requirement, we will only use the septic tank for the black water. The output of the septic will then be combined with the gray water stream in a holding tank and processed together. Thus, we have a bit more cost up front but essentially the same long-term system.
One change is that rather than use plant beds to treat the waste stream, we will use a self-contained biofilter. Wikipedia offers a reasonable explanation of what I am talking about; the part described there as a trickling filter is what replaces the plant bed.
The way it works is that you fill the filter with something that has a lot of surface area, such as open-cell polyurethane foam cubes. Aerobic bacteria collect on the foam. You spray the effluent over it, and the bacteria break down the nasty stuff. Solids are settled out, and the resulting liquid is dispersed in a traditional drain field.
In operation, you want to batch-fill the biofilter. For example, you might spray five gallons of waste over it and let it do its job. How often you spray it is a function of how much waste you have to process, but you can't let the biofilter go too long between doses: it would dry out, the bacteria would die, and the system would have to go through a start-up cycle again to re-activate them.
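That sizing calculation is simple enough to sketch in a few lines. In this illustration, the volumes and the clamp limits are invented for the example, not measured figures:

```python
def dose_interval_seconds(daily_liters, dose_liters,
                          min_doses_per_day=6, max_doses_per_day=48):
    """Seconds between doses for the expected daily waste volume and
    dose-tank size.  The clamp keeps the filter from drying out at low
    flow or flooding at high flow (the limits are illustrative)."""
    doses = daily_liters / dose_liters
    doses = max(min_doses_per_day, min(max_doses_per_day, doses))
    return 86400 / doses
```

So a 20-liter dose tank handling 960 liters a day would be dosed every half hour, while a nearly idle system would still be dosed six times a day to keep the bacteria alive.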
This piece of the system consists of two tanks and a control valve: a large holding tank (sized for one day's combined black- and gray-water output) and a dose tank. The dose tank is a small tank with a full sensor. The control system opens the valve from the holding tank to the dose tank until the dose tank is full and then closes the valve.
The output of the dose tank goes to a spray head that sprays the effluent over the biofilter medium.
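A minimal sketch of that fill cycle, with the valve and sensor supplied as callables so the same logic works against real hardware or a test harness. The function names and the timeout value are assumptions, not part of any existing design:

```python
import time

FILL_TIMEOUT = 300   # seconds before a stuck-empty fault is assumed (illustrative)

def run_dose_cycle(open_valve, close_valve, dose_tank_full,
                   now=time.time, sleep=time.sleep):
    """Fill the dose tank once: open the valve, poll the full sensor,
    close the valve.  Returns the fill time in seconds, or None if the
    tank never filled (dose valve or feed pipe fault)."""
    start = now()
    open_valve()
    try:
        while not dose_tank_full():
            if now() - start > FILL_TIMEOUT:
                return None
            sleep(1)
    finally:
        close_valve()        # never leave the valve open
    return now() - start
```

Returning the fill time is deliberate: as discussed below, it doubles as a health indicator for the valve and feed pipe.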
There is a need for a control system here but we also want to get as close to the low-tech end as possible for the following reasons:
- We are located in the middle of nowhere. We don't want to have to rely on hard-to-find parts. (That is actually one of the reasons we decided on open cell foam for the biofilter rather than other, more specialized materials).
- We are a Geek Ranch, not a sewage treatment facility. We don't want to need an engineer on staff to run the system.
- We want to minimize electricity use. Ideally, we want a system that can run off an internal battery for, let's say, 24 hours.
- We would like this design to be useful to others who need a similar system.
Because the buildings are located at 1370 meters of altitude and over 75% of the property is located at least 50 meters lower, we can take advantage of gravity to move the liquid from tank to tank. Thus, the only power requirements are the control system itself and one valve to fill the dose tank.
The control system really has only two required tasks:
- Open a control valve at the correct interval, and for the correct amount of time, to fill the dose tank.
- Monitor the system to detect problems such as a clogged biofilter.
It is, however, desirable to produce a log of the operation. This could be used to tell us, for example, when the system was getting near capacity. (You know this from how often the biofilter is being filled.)
The parameters you need to be able to configure for the system are:
- Size of the dose tank. This determines how much waste is taken from the holding tank for each spray of the biofilter.
- Minimum dose rate
- Maximum dose rate
- Size of the holding tank
- Alarm conditions (such as high level in the holding tank)
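Those parameters map naturally onto a small configuration file that the control task reads at start-up. Here is one possible sketch using Python's configparser; the section name, keys, units and values are all invented for illustration:

```python
import configparser

# Hypothetical parameter file for the control system.
DEFAULTS = """
[biofilter]
dose_tank_liters    = 20
holding_tank_liters = 4000
min_doses_per_day   = 6
max_doses_per_day   = 48
holding_high_alarm  = 0.90
"""

def load_config(text=DEFAULTS):
    """Parse the parameter file into a plain dictionary."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    b = cfg["biofilter"]
    return {
        "dose_liters":    b.getfloat("dose_tank_liters"),
        "holding_liters": b.getfloat("holding_tank_liters"),
        "min_doses":      b.getint("min_doses_per_day"),
        "max_doses":      b.getint("max_doses_per_day"),
        "high_alarm":     b.getfloat("holding_high_alarm"),
    }
```

A flat text file like this keeps the "low-tech" promise: it can be edited over the web interface or with any editor, with no database in sight.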
Inputs to the control system are:
- Level in the holding tank
- Dose tank full
- Biofilter full (indicating a fault)
- Settle basin full
- Battery low
- Possibly other sensors for faults in other areas
- System start/stop switch (for cleaning the biofilter, for example)
Outputs from the control system are:
- Open valve to dose tank
- Panel indicators to show system status (probably using an LCD or LED display):
  - System on
  - Storage tank level information
  - Dose valve open
  - System fault (could also be an audible alarm)
- Status report information (detailed below)
Error conditions include:
- Holding tank full
- Holding tank empty
- Dose tank remains full (indicating a clogged spray system)
- Dose tank remains empty (indicating a fault in the dose valve or clogged pipe from the holding tank)
- Biofilter remains full (indicating it is clogged)
- Settle basin full (indicating a clogged drain field system)
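Those conditions reduce to a check over the current sensor booleans plus two timers that track how long the dose tank has been stuck in one state. A sketch, where the sensor names and the 30-minute stuck threshold are assumptions for illustration:

```python
STUCK_LIMIT = 1800   # seconds the dose tank may stay full or empty (illustrative)

def check_faults(sensors, dose_full_for, dose_empty_for):
    """Map raw sensor states and stuck-timers to a list of fault strings."""
    faults = []
    if sensors.get("holding_high"):
        faults.append("holding tank full")
    if sensors.get("holding_empty"):
        faults.append("holding tank empty")
    if dose_full_for > STUCK_LIMIT:
        faults.append("dose tank stuck full: clogged spray system")
    if dose_empty_for > STUCK_LIMIT:
        faults.append("dose tank stuck empty: dose valve or feed pipe")
    if sensors.get("biofilter_full"):
        faults.append("biofilter clogged")
    if sensors.get("settle_full"):
        faults.append("settle basin full: clogged drain field")
    return faults
```

Keeping the logic in one pure function like this makes it trivial to exercise every fault path on a desktop machine before the board ever sees sewage.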
The status report is just a chronological log showing system events. A log entry would be made at these times:
- System reboot
- Each time a dose is sent to the dose tank
- When a fault condition occurs
- When the fault is cleared
Each log entry would include:
- Event (reboot, dose, fault, fault cleared)
- Level in the holding tank
- If this was a dose, the time the dose valve remained open to fill the dose tank
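A one-line-per-event text format is plenty for this. The exact layout below is an invented example, not a settled specification:

```python
import time

def log_record(event, holding_level, fill_seconds=None, when=None):
    """Format one log line: timestamp, event, holding-tank level and,
    for a dose, how long the valve stayed open."""
    when = time.time() if when is None else when
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(when))
    line = "%s %s level=%.0f%%" % (stamp, event, holding_level * 100)
    if fill_seconds is not None:
        line += " fill=%ds" % fill_seconds
    return line
```

Because every line carries the holding-tank level, a trivial grep over the log answers the "are we nearing capacity?" question mentioned earlier.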
For the hardware, I am thinking about the following:
- VIA EPIA 5000AG motherboard
- USB (or CF) as "the disk"
- Plug-in 12V power supply
That is a fan-less CPU board with a 533MHz VIA C3 processor. The parallel port will be used for the I/O lines we need, except for the status display, which will be handled through the RS-232 serial port. For configuration and reading the status log, a web-based interface makes the most sense. We can interface with the system remotely by connecting its Ethernet port to the network (directly or via a WiFi radio) or by adding a PCI card with a communications radio.
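For the parallel-port I/O, one low-dependency approach on Linux is to poke the port registers directly through /dev/port (this needs root, and the base address and bit assignments below are assumptions to verify against the actual board and wiring):

```python
import os

LPT_BASE = 0x378      # typical first parallel port on x86; verify on the board
DOSE_VALVE = 0x01     # D0 drives the dose-valve solenoid (assumed wiring)
DOSE_FULL = 0x40      # S6 carries the dose-tank full sensor (assumed wiring)

def port_write(value, offset=0):
    """Write one byte to the data register (outputs) via /dev/port."""
    fd = os.open("/dev/port", os.O_WRONLY)
    try:
        os.lseek(fd, LPT_BASE + offset, os.SEEK_SET)
        os.write(fd, bytes([value & 0xFF]))
    finally:
        os.close(fd)

def port_read(offset=1):
    """Read the status register (the input lines) via /dev/port."""
    fd = os.open("/dev/port", os.O_RDONLY)
    try:
        os.lseek(fd, LPT_BASE + offset, os.SEEK_SET)
        return os.read(fd, 1)[0]
    finally:
        os.close(fd)

def line_set(status_byte, mask):
    """True if the masked line is asserted (active-high assumed)."""
    return bool(status_byte & mask)
```

The kernel's ppdev driver would be a cleaner alternative; the point is only that a handful of sensor and valve lines need no special I/O hardware at all.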
Clearly, something like Apache is overkill. A few years ago, I designed a radio station controller using Karrigell, a Python-based application framework that includes its own web server. It is small, easy to understand and works great.
Much like the radio station design, the real-time task that controls the system can just read some saved parameters to know what to do and append log records to the log file. In order to prevent excessive updates to a single location in the flash storage, the log can be saved in RAM and then periodically flushed to flash.
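That buffer-then-flush idea can be sketched in a few lines; the ten-minute flush interval is illustrative, not a requirement:

```python
import time

class FlashFriendlyLog:
    """Keep log lines in RAM and append them to flash in batches, so a
    single block of the CF/USB storage is not rewritten on every event."""
    def __init__(self, path, flush_every=600, now=time.time):
        self.path = path
        self.flush_every = flush_every
        self.now = now
        self.buf = []
        self.last_flush = now()

    def add(self, line):
        self.buf.append(line)
        if self.now() - self.last_flush >= self.flush_every:
            self.flush()

    def flush(self):
        if self.buf:
            with open(self.path, "a") as f:
                f.write("\n".join(self.buf) + "\n")
        self.buf = []
        self.last_flush = self.now()
```

The trade-off is losing up to one flush interval of log entries on a power failure, which is acceptable for a status log and kind to cheap flash media.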
That's it: a Linux-controlled sewage plant. While it may not make for exciting cocktail-time conversation, it does seem like a good solution to a real-world problem. Now we just need to password-protect the status system so our geeks won't try flushing their toilets repeatedly to see whether the level in the holding tank changes.