Chinavasion Pico Projector
As I extracted and tried to run various ARM-compiled Debian binaries on the projector, a common issue popped up: either my glibc was too new or too old. glibc is the core C library that basically every C program on a Linux system needs to run, and the version this projector had seemed to fall somewhere between Debian Etch and Debian Lenny, with binaries from the former not executing and binaries from the latter complaining about a glibc that was a bit too old. Lenny seemed to be the distribution closest to this one, but I honestly am not sure what distribution or version this Linux install is based on. The libc-2.3.3.so file may even come from some ARM-compiled Fedora distribution. In any case, without glibc support, there was basically no way I could get most of my binaries to run, and I didn't want to risk bricking the device by overwriting the existing libraries, so I had to find a different approach.
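If you want to check this sort of glibc mismatch yourself, here is a small sketch you can run on any Linux box with binutils installed; /bin/ls is just a stand-in for whatever binary you are trying to port:

```shell
# Show the glibc version installed on the running system.
ldd --version | head -n 1

# List the glibc symbol versions a given binary requires. If the highest
# version listed is newer than the target system's glibc, the binary
# will refuse to run there with the familiar "GLIBC_x.y not found" error.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

Comparing that list against the target's glibc tells you up front whether a binary has any chance of running, which beats copying it over and watching it fail.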
My next plan was to track down a small Debian Lenny-based ARM distribution so I could copy the full root filesystem to the 400MB /mnt/mtd partition. Then, I could just chroot into that and run the commands I wanted. That way, all the commands would use that glibc, and I could add extra ARM-compiled Lenny packages. The problem I ran into fairly quickly was that, well, chroot segfaulted. Both the chroot binary that was included on the projector and the version in my Lenny install failed to work.
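The plan boils down to a few commands. This is only a sketch of the approach; the device node, mount point and tarball name are assumptions for illustration:

```shell
# Mount the USB key holding a Lenny armel root filesystem tarball,
# unpack it onto the 400MB /mnt/mtd partition, then chroot into it.
mount /dev/sda1 /mnt/usb
tar -C /mnt/mtd -xzf /mnt/usb/lenny-armel-rootfs.tar.gz
chroot /mnt/mtd /bin/sh    # this is the step that segfaulted
```

Had chroot worked, everything launched from that shell would have resolved libraries from the Lenny filesystem instead of the projector's own.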
At this point, I started refreshing my memory on LD_LIBRARY_PATH, LD_PRELOAD and other variables I could use to tell a binary to use the version of the libraries under /mnt/mtd that I had installed. It turned out that got me a lot further. I could launch ls and a number of other console applications, including useful programs like strace and lsof; however, fbgetty was abandoned before Debian Lenny, so I had to try other framebuffer terminal applications in Lenny, such as jfbterm. The application would start, but it never seemed to be able to attach to the tty it wanted, and ultimately, it would error out. After trying a few things, including changing the permissions on /dev/console to be more permissive, I gave up for the moment and turned off the projector.
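The workaround looks roughly like the following. The paths under /mnt/mtd are assumptions based on a standard Debian layout, and the dynamic loader's name varies by architecture (ld-linux.so.3 is typical for armel):

```shell
# Point the dynamic linker at the Lenny libraries copied under /mnt/mtd
# before running a Lenny binary:
LD_LIBRARY_PATH=/mnt/mtd/lib:/mnt/mtd/usr/lib /mnt/mtd/bin/ls /

# If the binary also needs a matching dynamic loader, invoke the loader
# directly and hand it the library path and the binary to run:
/mnt/mtd/lib/ld-linux.so.3 --library-path /mnt/mtd/lib /mnt/mtd/bin/ls /
```

Setting the variable per-command, rather than exporting it globally, avoids accidentally pointing the projector's own binaries at the foreign libraries.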
The next time I turned on the projector, the splash screen started with its scrolling progress bar, but after a minute or two, the progress bar seemed to freeze, and the system was unresponsive. Great, I bricked it. But how? After all, I had just added some files to a basically empty mountpoint and changed permissions on /dev/console. At least, that's all I could remember doing. At this point, I wasn't sure what to do. I didn't want to take apart the device (at least not yet), so I rebooted it a few times and tried to see if I could get any information. One thing I considered was that the system might have needed all that space in /mnt/mtd that I had filled up, or possibly that all of my changing of VTs could somehow have been remembered (even though that seemed unlikely).
At this point, I remembered the init script that automatically upgraded the device, so I decided to make a USB key with a test script that would write a file back to the USB drive. If that worked, I could just continuously boot the device with the USB key and build up a script that could repair any damage I may have done. Unfortunately, when I tried this approach, I noticed that the device didn't seem to get very far into the boot process—the script never seemed to run.
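A probe script for that kind of recovery loop can be very simple. This is a hypothetical sketch; the USB mount point /mnt/usb and file names are assumptions, since the actual paths the upgrade hook uses are unknown:

```shell
#!/bin/sh
# Prove the auto-upgrade hook actually ran this script by writing back
# to the USB key, and capture some system state for later repair work.
echo "script ran at $(date)" >> /mnt/usb/proof.txt
dmesg > /mnt/usb/dmesg.txt 2>&1
ls -lR /mnt/mtd > /mnt/usb/mtd-listing.txt 2>&1
sync    # flush writes before the device is powered off
```

Once a proof file shows up on the key, each reboot becomes a chance to add one more repair command to the script.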
Next, I decided to connect a USB keyboard and see whether the system even loaded the kernel at all. It turned out that the keyboard did work: the Caps-Lock key lit up, and Ctrl-Alt-Del at the right point did reboot the device. Unfortunately, when the system froze, so did the keyboard. At this point, I tried all sorts of keyboard combinations during boot, changing the VT back and forth to attempt to Ctrl-C through some script that may have stalled, and through some sort of magic voodoo (whether it was something I hit on the keyboard or, as I've come to suspect, something in the hardware that started working again), the system finally booted fully.
After I breathed a sigh of relief, I realized I needed to make an immediate backup of the filesystem, so I had some record of what was there. I also decided to take off the cover to see if I could find a serial port on the hardware and access the boot prompt and boot messages. If you look at Figure 3, you can see the bottom of the motherboard on the device, and at the top of the picture is a small five-pin white connector labeled Program that I'm assuming is some sort of serial interface. Unfortunately, this connector is incredibly tiny, so I haven't been able to track down a compatible connector yet and test this theory. Figure 4 shows the top of the motherboard—what you would see if you simply unscrewed and lifted off the top of the projector. One thing I noticed when I looked on the board was extra soldering points for an extra USB port and VGA. More-seasoned hardware hackers might find even more interesting things on the board once they take a look.
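For the backup itself, one hedged approach is a tar snapshot onto an attached USB key (the mount point is an assumption), skipping virtual filesystems and the backup destination:

```shell
# Archive the root filesystem, excluding pseudo-filesystems and the
# mounted USB key itself so the archive doesn't try to contain itself.
tar -czf /mnt/usb/projector-rootfs.tar.gz \
    --exclude=./proc --exclude=./sys --exclude=./dev \
    --exclude=./mnt/usb \
    -C / .
```

A full listing of the archive afterward (tar -tzf) is a cheap sanity check that the exclusions worked and the files you care about made it in.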
Unfortunately, after I took it apart, I had the same trouble with a frozen progress bar at boot. After a number of reboots, I finally was somehow able to get it to boot completely, but since then, I've experienced the same issue a few other times. Eventually, the system will boot, but as of yet, I've been unable to track down the source of the problem. At this point, I'm leaning toward some sort of short on the device.
Kyle Rankin is a systems architect and the author of DevOps Troubleshooting, The Official Ubuntu Server Book, Knoppix Hacks, Knoppix Pocket Reference, Linux Multimedia Hacks, and Ubuntu Hacks.