Dealing with, um, Wastewater
MARENA, the government agency responsible for the environment in Nicaragua, has asked us to use a biofilter wastewater treatment system instead of a traditional septic tank and drain field for the Geek Ranch. The reasoning is that because we are building in a nature reserve, we are held to higher standards than are typical outside the reserve.
While we don't claim to be wastewater-system experts, we are geeks, so this sounded like a technology challenge. Beyond that, the good news is that a local friend is a retired wastewater engineer (a job that goes by many other titles), so we have the resources to combine his knowledge of the, shall we say, material-handling part of the system with our knowledge of control systems.
On the control system technical side is Willy Smith, a fellow Geek Ranch participant with a lot of engineering experience with control systems. As he is also a Linux geek, Linux seems to be the right answer. What you see here is really the design specification for the geek side of the system.
The Task
First, let me define the system's requirements. We need to process the waste streams from a restaurant, a hotel and geek cabinas. We had previously decided to separate black water (toilet waste) from gray water (showers, sinks, wash water and so on). The original plan was to start with a traditional septic system capable of handling the total load of our initial construction. We then would build a gray-water processing system (probably using plants) and move the gray water over to that system, freeing up septic capacity to support more hotel rooms and geek cabinas.
With the new requirement, we will only use the septic tank for the black water. The output of the septic will then be combined with the gray water stream in a holding tank and processed together. Thus, we have a bit more cost up front but essentially the same long-term system.
One change is that rather than use plant beds to treat the waste stream, we will use a self-contained biofilter. Wikipedia offers a reasonable explanation of what I am talking about; the trickling filter it describes is the part that replaces the plant bed.
The way it works is that you fill the filter with something with a lot of surface area, such as open-cell polyurethane foam cubes. Aerobic bacteria collect on the foam. You spray the effluent over it, and the bacteria break down the nasty stuff. Solids are settled out, and the resulting liquid is dispersed in a traditional drain field.
In operation, you want to batch-fill the biofilter. For example, you might spray five gallons of waste over it and let it do its job. How often you spray it is a function of how much waste you have to process. You don't want to "use up" all the waste, as the biofilter would dry out, the bacteria would die, and the system would have to go through a start-up cycle again to re-activate the bacteria.
This piece of the system consists of two tanks and a control valve: a large holding tank (sized for one day of combined black- and gray-water flow) and a dose tank. The dose tank is a small tank with a full sensor. The control system opens the valve from the holding tank to the dose tank until the dose tank is full and then closes the valve.
The output of the dose tank goes to a spray head that sprays the effluent over the biofilter medium.
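The dosing cycle described above is simple enough to sketch in a few lines of Python. This is only an illustration of the logic, not the actual controller: the `open_valve`, `close_valve` and `dose_tank_full` callbacks stand in for whatever real sensor and valve interface the hardware ends up using, and the timeout value is an assumed placeholder.

```python
import time

VALVE_TIMEOUT_S = 300  # assumed limit: give up if the tank never reads full


def run_dose_cycle(open_valve, close_valve, dose_tank_full, poll_s=1.0):
    """Open the holding-tank valve until the dose tank's full sensor
    trips, then close it.  Returns the fill time in seconds, or None
    on timeout (a fault: stuck valve or a clogged pipe)."""
    start = time.monotonic()
    open_valve()
    try:
        while not dose_tank_full():
            if time.monotonic() - start > VALVE_TIMEOUT_S:
                return None
            time.sleep(poll_s)
    finally:
        close_valve()  # never leave the valve open, even on an error
    return time.monotonic() - start
```

The `finally` clause matters: whatever goes wrong, the valve is closed before the function returns, so a software fault can't flood the biofilter.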
There is a need for a control system here but we also want to get as close to the low-tech end as possible for the following reasons:
- We are located in the middle of nowhere. We don't want to have to rely on hard-to-find parts. (That is actually one of the reasons we decided on open cell foam for the biofilter rather than other, more specialized materials).
- We are a Geek Ranch, not a sewage treatment facility. We don't want to need an engineer on staff to run the system.
- We want to minimize electricity use. Ideally, we want a system that can run off an internal battery for, let's say, 24 hours.
- We would like this design to be useful to others who need a similar system.
Because the buildings are located at 1370 meters of altitude and over 75% of the property is located at least 50 meters lower, we can take advantage of gravity to move the liquid from tank to tank. Thus, the only power requirements are the control system itself and one valve to fill the dose tank.
The control system really has only two required tasks:
- Open a control valve at the correct interval and for the correct amount of time to fill the biofilter.
- Monitor the system to detect problems such as a clogged biofilter.
It is, however, desirable to produce a log of the operation. This could be used to tell us, for example, when the system was getting near capacity. (You know this from how often the biofilter is being filled.)
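The "near capacity" check mentioned above falls out of simple arithmetic on the log: each dose moves one dose-tank volume, so doses per day times dose-tank size approximates daily throughput. A sketch, with the 80% warning threshold being my own assumed default rather than anything from the design:

```python
def daily_throughput_liters(doses_in_last_24h, dose_tank_liters):
    """Estimate daily waste volume from the dose count in the log:
    each dose delivers one dose-tank volume to the biofilter."""
    return doses_in_last_24h * dose_tank_liters


def near_capacity(doses_in_last_24h, dose_tank_liters,
                  design_liters_per_day, warn_frac=0.8):
    """True when estimated throughput reaches warn_frac of the
    system's design capacity (warn_frac is an assumed threshold)."""
    used = daily_throughput_liters(doses_in_last_24h, dose_tank_liters)
    return used >= warn_frac * design_liters_per_day
```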
The parameters you need to be able to configure for the system are:
- Size of the dose tank (this determines how much waste is taken from the holding tank to spray on the biofilter each cycle)
- Minimum dose rate
- Maximum dose rate
- Size of the holding tank
- Alarm conditions (such as high level in the holding tank)
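These parameters could live in a small configuration table that the control task reads at startup. The values below are invented placeholders for illustration; only the parameter names come from the list above. Note how the minimum and maximum dose rates translate directly into bounds on the interval between doses:

```python
# Hypothetical parameter set; all numeric values are made-up examples.
CONFIG = {
    "dose_tank_liters":    19.0,    # roughly five US gallons per dose
    "min_doses_per_day":   4,       # keep the biofilter from drying out
    "max_doses_per_day":   48,      # don't flood the filter
    "holding_tank_liters": 3800.0,  # one day of combined black + gray water
    "alarm_high_level":    0.90,    # alarm at 90% holding-tank level
}


def dose_interval_bounds(cfg):
    """Return (min_s, max_s): the allowed range of seconds between
    doses, derived from the max and min dose rates."""
    day = 24 * 3600
    return (day // cfg["max_doses_per_day"],
            day // cfg["min_doses_per_day"])
```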
Inputs to the control system are:
- Level in the holding tank
- Dose tank full
- Biofilter full (indicating a fault)
- Settle basin full
- Battery low
- Possibly other sensors for faults in other areas
- System start/stop switch (for cleaning the biofilter, for example)
Outputs from the control system are:
- Open valve to dose tank
- Panel indicators to show system status (probably using an LCD or LED display)
- System on
- Storage tank level information
- Dose valve open
- System fault (could also be audible alarm)
- Status report information (detailed below)
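Because these inputs and outputs will live on parallel-port pins, the controller mostly does bit twiddling on two registers. The bit assignments below are hypothetical (the real pinout depends on the wiring), but the encode/decode helpers show the shape of the code:

```python
# Hypothetical pin assignments.  On a PC parallel port, the data
# register (base+0) drives outputs; bits of the status register
# (base+1) are readable inputs.
OUT_BITS = {
    "dose_valve_open": 0,
    "system_on_led":   1,
    "fault_led":       2,
}
IN_BITS = {
    "dose_tank_full":    3,
    "biofilter_full":    4,
    "settle_basin_full": 5,
    "battery_low":       6,
    "run_switch":        7,
}


def set_output(reg, name, on):
    """Return a new output-register value with one named bit changed."""
    mask = 1 << OUT_BITS[name]
    return reg | mask if on else reg & ~mask


def read_input(reg, name):
    """Decode one named input from a status-register value."""
    return bool(reg & (1 << IN_BITS[name]))
```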
Error conditions include:
- Holding tank full
- Holding tank empty
- Dose tank remains full (indicating a clogged spray system)
- Dose tank remains empty (indicating a fault in the dose valve or clogged pipe from the holding tank)
- Biofilter remains full (indicating it is clogged)
- Settle basin full (indicating a clogged drain field system)
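Most of these error conditions combine a sensor reading with timing: a full dose tank is normal right after a dose, but a fault if it stays full. A rough sketch of that check, with the drain-timeout value an assumed placeholder:

```python
def detect_faults(sensors, secs_since_dose, drain_timeout_s=600):
    """Map a snapshot of boolean sensor readings to a list of fault
    names.  'secs_since_dose' lets us distinguish a normally full
    dose tank from one that has failed to drain."""
    faults = []
    if sensors.get("holding_tank_full"):
        faults.append("holding tank full")
    if sensors.get("dose_tank_full") and secs_since_dose > drain_timeout_s:
        faults.append("dose tank not draining (clogged spray system?)")
    if sensors.get("biofilter_full"):
        faults.append("biofilter clogged")
    if sensors.get("settle_basin_full"):
        faults.append("settle basin full (clogged drain field?)")
    if sensors.get("battery_low"):
        faults.append("battery low")
    return faults
```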
The status report is just a chronological log showing system events. These are the times when a log entry would be made:
- System reboot
- Each time a dose is sent to the dose tank
- When a fault condition occurs
- When the fault is cleared
Each log entry would include:
- Event (reboot, dose, fault, fault cleared)
- Level in the holding tank
- If this was a dose, time the dose valve remained open to fill the dose tank
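A log entry with those fields needs nothing fancier than one timestamped text line per event. The exact format below is my own invention, purely to make the idea concrete:

```python
from datetime import datetime


def format_log_entry(event, holding_level_pct, fill_time_s=None, now=None):
    """One log line: timestamp, event name, holding-tank level, and,
    for a dose, how long the dose valve stayed open while filling."""
    ts = (now or datetime.now()).strftime("%Y-%m-%d %H:%M:%S")
    fields = [ts, event, f"level={holding_level_pct:.0f}%"]
    if fill_time_s is not None:
        fields.append(f"fill={fill_time_s:.1f}s")
    return " ".join(fields)
```

A plain-text log like this stays greppable over a serial console, which fits the low-tech goal.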
For the hardware, I am thinking about the following:
- VIA EPIA 5000AG motherboard
- USB (or CF) as "the disk"
- Plug-in ("wall wart") 12V power supply
That is a fanless CPU board with a 533MHz VIA C3 processor. The parallel port will handle the I/O lines we need, except for the status display, which will go through the RS-232 serial port. For configuration and reading the status log, a Web-based interface makes the most sense. We can interface to the system remotely by connecting its Ethernet port to the network (directly or via a WiFi radio) or by adding a PCI card with a communications radio.
Clearly, something like Apache is overkill. A few years ago, I designed a radio-station controller using Karrigell, a Python-based application framework that includes its own Web server. It is small, easy to understand and works great.
Much like the radio-station design, the real-time task that controls the system can simply read some saved parameters to know what to do and append log records to the log file. To prevent excessive writes to a single location in the flash storage, the log can be kept in RAM and periodically flushed to flash.
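The RAM-buffered log could be as simple as the following sketch. The batch size is an arbitrary assumption; the point is that the flash block holding the log file gets rewritten once per batch instead of once per event:

```python
import os


class BufferedLog:
    """Collect log lines in RAM and flush them to flash in batches,
    so the storage sees far fewer write cycles."""

    def __init__(self, path, flush_every=20):
        self.path = path
        self.flush_every = flush_every   # assumed batch size
        self.buf = []

    def append(self, line):
        self.buf.append(line)
        if len(self.buf) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.buf:
            return
        with open(self.path, "a") as f:
            f.write("\n".join(self.buf) + "\n")
            f.flush()
            os.fsync(f.fileno())  # make sure it actually hits storage
        self.buf = []
```

The trade-off is obvious: a power failure loses at most one batch of entries, which is acceptable for a status log and worth the reduced flash wear. Flushing on a timer as well as on batch size would bound that loss in time too.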
That's it: a Linux-controlled sewage plant. While it may not make for exciting cocktail-hour conversation, it does seem like a good solution to a real-world problem. Now, we just need to password-protect the status system so our geeks won't try flushing their toilets repeatedly to see if the level in the holding tank changes.