PIC Programming with Linux
The vast majority of the computers in the world do not run Windows. While this is good news for Linux enthusiasts, the bad news is that they don't run Linux or much of any kind of operating system at all. These are the computers running your televisions, VCRs, cell phones, pagers and marine radios. They go by cryptic names like MC68HC05, 87C51, or PIC16C84 and are manufactured by companies like Motorola, Philips and Microchip.
Microcontrollers are the workhorse computers of the world. They do the repetitive tasks that require little or no human intervention, and most of them will not even blink when the “millennium” bug hits their larger, faster cousins. They power up, do their job and power down again, using very little power and definitely not requiring a heat sink and a fan.
One of these little wonders is the PIC16C84 from Microchip. This is an 18-pin processor with 1K of electrically erasable/programmable read-only memory (EEPROM) for program storage, 36 bytes of SRAM and 13 input/output lines; it can operate at speeds ranging from DC (0Hz) to 10MHz.
The PIC16C84 is an excellent introduction to embedded processors and assembly language. The RISC instruction set has only 35 commands (op-codes) to learn, and the cost is under $8 for one. You can build a PIC programmer for under $20 in parts, or you can buy one pre-built and pre-tested over the Internet. Prototype boards are also available that need only a processor; they already have the clock crystal and programmer header, as well as a small prototyping area for adding to the circuits (usually a couple of LEDs for your first project).
This low cost for development doesn't mean that the PIC cannot be used for serious work. Several of my projects include an interface between the PC and the Dallas Semiconductor 1-wire bus, and a wired remote control that uses the Sony Control-L protocol to control a camcorder. In the most recent Circuit Cellar Ink contest, one of the winners implemented the PPP and TFTP protocols using an 8-pin PIC12C672.
Because of the ease of designing and building a PIC programmer to attach to a parallel port, dozens of designs are available, all using different pins on the parallel port. Some use inverters on all the control lines, and others use inverters on only some of the lines. My program picprg can handle all of these, as long as they use the standard five control lines. With all of these variations, the software to drive the programmer needs to be easily configurable.
Another feature of these devices is the ability to design a programming header into the circuit so that the processor can be programmed without removing it from the device it is attached to. This facilitates software work in the field, allowing technicians to easily service and upgrade the software.
When I first started using the PIC16C84, a compiler was already available for Linux, but no Linux software ran the HOPCO programmer that I use. An easy way to solve this problem would have been to get the DOS software included with the programmer to run under DOSEMU. Since I never seem to pick the easy way, I decided to write a native Linux PIC programmer. I settled on a full-screen ncurses interface, which runs on a virtual console or in an xterm as long as the TERM environment variable is set to xterm-color.
My picprg program allows you to program the PIC microcontroller, read previously programmed PICs, verify a PIC against the program in memory, and view the program in hexadecimal. It also features a versatile configuration screen, which makes it a snap to use with the wide variety of PIC programmers available.
Compiling picprg is easy: you just type make in the source directory and a binary called picprg is generated. The only dependency for picprg that may cause problems is the ncurses library. You must have v1.9.9e or later installed for it to work. All of the Linux distributions that I know of include ncurses by default, so you should be set. If you want to install it as suid root in /usr/local/bin, then type make install; otherwise, you will have to move it to your preferred final location.
picprg must be run as root, since it requires low-level access to the /dev/lp device that isn't available to normal users, even those with write access. You can either run it from the root account or install it suid root so that it always runs with root privileges. Remember that any program running suid root is a potential security risk.
The first time picprg is started, you must pass it the number of the printer port (/dev/lpX) to which you have attached the programmer. I have my modified HOPCO programmer attached to /dev/lp2, so I run picprg -p2 to start it for the first time. You will see a nice blue screen (I'm still addicted to the color scheme of my Atari 800) as shown in Figure 1.
The main menu is self-explanatory. Pick option C to get the configuration menu. Use the arrow keys to navigate the list of configuration options, and a short help message will be displayed for each selection.