Developing for the Atmel AVR Microcontroller on Linux
Program space is a contiguous block of Flash memory, 16 bits wide, that can be erased and rewritten at least 10,000 times. You can design your circuit to allow in-circuit firmware upgrades by using in-system programming (ISP).
All AVRs have some EEPROM, and most have SRAM; both are 8 bits wide. The EEPROM is designed to withstand at least 100,000 erase/write cycles. EEPROM is useful because it can be written from within your embedded program, so data is retained even without a power supply. It can also be written during programming, for example to store production-line calibration values.
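On the chip, this kind of persistent storage is accessed through avr-libc's &lt;avr/eeprom.h&gt; routines. The sketch below shows the pattern; the EEPROM and the library calls are stood in for by a plain array so it compiles on a desktop machine, and the calibration slot address is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the avr-libc <avr/eeprom.h> routines, backed by an
 * ordinary array so this sketch compiles stand-alone. */
static uint8_t fake_eeprom[512];

static uint8_t eeprom_read_byte(const uint8_t *addr)
{
    return fake_eeprom[(size_t)addr];
}

static void eeprom_update_byte(uint8_t *addr, uint8_t value)
{
    /* Like the real eeprom_update_byte(), skip the write when the cell
     * already holds the value -- each physical write uses up one of the
     * ~100,000 erase/write cycles. */
    if (fake_eeprom[(size_t)addr] != value)
        fake_eeprom[(size_t)addr] = value;
}

/* Hypothetical location of a calibration byte in EEPROM. */
#define CAL_OFFSET ((uint8_t *)0x10)

static void save_calibration(uint8_t cal)
{
    eeprom_update_byte(CAL_OFFSET, cal);
}

static uint8_t load_calibration(void)
{
    return eeprom_read_byte(CAL_OFFSET);
}
```

In a real program, you would drop the fake array, include &lt;avr/eeprom.h&gt; and let the library talk to the EEPROM hardware; the read-compare-write pattern stays the same.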
All AVRs, from the tiny 8-pin DIPs to the 44-pin Megas, have at least one data port. Data ports allow for input or output of logic-level data. The AVR ports are bidirectional, allowing you to set them for input or output on a pin-by-pin basis.
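Pin-by-pin direction control works through the data-direction registers: a 1 bit in DDRx makes the corresponding pin an output, a 0 bit makes it an input. A minimal sketch, with the real DDRB/PORTB registers from &lt;avr/io.h&gt; replaced by plain variables so it compiles stand-alone:

```c
#include <stdint.h>

/* Stand-ins for the DDRB and PORTB registers from <avr/io.h>,
 * so this sketch compiles on a desktop machine. */
static uint8_t DDRB, PORTB;

/* A 1 bit in the data-direction register makes that pin an output. */
static void pin_as_output(uint8_t pin) { DDRB |=  (uint8_t)(1u << pin); }
static void pin_as_input(uint8_t pin)  { DDRB &= (uint8_t)~(1u << pin); }

/* Drive an output pin high or low via the port register. */
static void pin_write(uint8_t pin, uint8_t high)
{
    if (high)
        PORTB |=  (uint8_t)(1u << pin);
    else
        PORTB &= (uint8_t)~(1u << pin);
}
```

On the actual part, you would include &lt;avr/io.h&gt; and the same bit operations act directly on the hardware registers.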
Many of the AVRs include additional hardware peripherals, such as UARTs for serial communication and calibrated RC oscillators used as internal system clocks. The external pins often serve two or more purposes, and how they are used depends on how you've configured the microcontroller. For instance, Figure 1 shows that certain I/O lines from both ports can be used with the multiplexed A/D converter.
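Channel selection for the multiplexed A/D converter illustrates that dual use: the low bits of the ADMUX register choose which port pin the single converter samples. The mask width below is an assumption (it varies by device, so check your part's data sheet), and ADMUX is mocked as a plain variable so the sketch compiles stand-alone.

```c
#include <stdint.h>

/* Stand-in for the ADMUX register from <avr/io.h>. */
static uint8_t ADMUX;

/* On many AVRs the MUX3:0 bits of ADMUX select the input channel;
 * the exact field width is device-specific. */
#define MUX_MASK 0x0Fu

static void adc_select_channel(uint8_t channel)
{
    /* Clear the mux field, then set the new channel, leaving the
     * reference-selection bits in the high nibble untouched. */
    ADMUX = (uint8_t)((ADMUX & (uint8_t)~MUX_MASK) | (channel & MUX_MASK));
}
```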
The set of tools described here isn't the only option, but it covers the whole development cycle and the tools work well together. The toolkit comprises Binutils, GCC, AVR Libc and our Makefile template to write and build programs for the AVR microcontrollers; GDB and simulavr to debug your software; and avrdude, together with a hardware programmer, to transfer your software to the microcontrollers. See the on-line Resources for download URLs for all software.
Fortunately, the recent versions of all these tools include support for the AVR platform, so installation is straightforward. We assume you've chosen to install everything under /usr/local/AVR.
Download a fresh copy of the current binutils source by following the link in the Resources. Untar the source, move into the binutils-X.XX directory and run:
$ ./configure --prefix=/usr/local/AVR --target=avr
$ make
# make install
The /usr/local/AVR/bin directory now contains AVR versions of ld, as, ar and the other binutils executables. Add the /usr/local/AVR/bin directory to your PATH now. You can apply the modification system-wide by adding:

export PATH=$PATH:/usr/local/AVR/bin

to the /etc/profile file. Make sure the directory is in your PATH and that the change has taken effect before proceeding.
After retrieving a recent release of the GNU Compiler Collection from a mirror, run the following commands from within the unpacked top-level source directory:
$ ./configure --prefix=/usr/local/AVR \
  --target=avr --enable-languages="c,c++" \
  --disable-nls
$ make
# make install
This builds C and C++ compilers for AVR targets and installs avr-gcc and avr-g++ in /usr/local/AVR/bin.
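A quick way to check the cross compiler is a hypothetical one-file build; the file names and the ATmega16 part are assumptions, and -mmcu names the target device:

```shell
# Compile and link for an ATmega16, optimizing for size.
avr-gcc -mmcu=atmega16 -Os -Wall -o main.elf main.c

# Convert the ELF output to the Intel hex format most programmers expect,
# dropping the .eeprom section from the Flash image.
avr-objcopy -O ihex -R .eeprom main.elf main.hex
```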
The AVR Libc package provides a subset of the standard C library for AVR microcontrollers, including math, I/O and string processing utilities. It also takes care of basic AVR startup procedures, such as initializing the interrupt vector table, stack pointer and so forth. To install, get the latest release of the library and run the following from the top-level source directory:
$ unset CC
$ PREFIX=/usr/local/AVR ./doconf
$ ./domake
# ./domake install
The Psychogenic team has created a standard Makefile template that simplifies AVR project management. You can customize it easily for all your assembly, C and C++ AVR projects. It provides a host of make targets, from compilation and upload to the microcontroller to debugging aids, such as source code intermixed with disassembly and helpful gdbinit files. A detailed discussion of the template is available, and the Makefile template is available as Listing 1 on the Linux Journal FTP site (see Resources). Store the template with the other AVR tools, moving it to /usr/local/AVR/Makefile.tpl.