Lighting Simulation with Radiance
When I wanted to design a log home on my computer to see what it would look like under actual lighting conditions, I tried AutoCAD, 3D Studio Max and numerous off-the-shelf home design packages. None of them provided the realistic output or easy support for dealing with the log walls I desired. I had been playing with a lighting simulation package from the Lawrence Berkeley National Laboratory (LBL) known as Radiance and decided I could get what I wanted much faster by adding utilities to it.
Radiance is a physical lighting simulation system written primarily by Greg Ward Larson. It has been around since the early 1990s and recently changed licensing from a free-for-noncommercial-use license to the open-source model. The package produces great-looking images that are output in a special format that records both the texture and physical lighting of a scene, much like the professional products LightScape and VIZ 4 by Autodesk.
The packages used for movie and game making are really the graphics equivalent of junk food factories. The end result may be attractive and popular, but it isn't substantial. The physical details of lighting simply aren't as important as speed to movie and game makers, because they have a lot of pixels to push. A two-hour movie has 172,800 individual frames, and games have to run in real time. As a result, light becomes an artifact of an artistic algorithm in most graphics systems and has little basis in reality.
Radiance output is considered a lab-quality simulation of the physics of light (as long as your input is realistic) and has been rigorously tested in the professional world.
You can obtain the Radiance source code from radsite.lbl.gov. I recommend getting the source tarball, as the compiled RPMs do not include any of the auxiliary files. Once you have the tarball:
$ tar xzf rad3R4.tar.gz
$ cd ray
$ ./makeall install
Then, simply answer the questions about where you want to put the software. I use $HOME/radiance/bin for the binaries and $HOME/radiance/lib for the auxiliaries.
The makeall script doesn't install the sample scenes or the documentation, so you have to move those files to a good spot also. For example:
$ mv doc/man $HOME/radiance
$ mv obj $HOME/radiance
Be sure to add these directories to the MANPATH and PATH variables in your profile. One caveat: there is an important utility called rview in the package. Unfortunately, Vim also has a utility of the same name, so use a PATH modification or rename Vim's rview. Do not rename the Radiance utility, because it is called indirectly by other Radiance utilities.
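For example, something along these lines in your shell profile should do it (the paths match the install locations above; RAYPATH is the variable the Radiance tools consult to find their auxiliary library files, so it is worth setting as well):

export PATH=$HOME/radiance/bin:$PATH
export MANPATH=$HOME/radiance/man:$MANPATH
export RAYPATH=.:$HOME/radiance/lib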
New users of Radiance first will notice the lack of an included CAD system for generating the scene description. The package was written for research purposes under UNIX in the early 1990s, and if you look at the file formats, it is obvious they were written for command-line junkies like myself who love the power of pipes and plain-text processing (my own initials are AWK, after all).
Nevertheless, there are utilities for translating geometry from formats like DXF, Wavefront and MGF so you can use any utility that will output such a format. Many of the modelers listed in the application archive of linux.org will output one of these. A Windows-based AutoCAD/Radiance module called Desktop Radiance is also available from the Radiance web site if you happen to own a compatible version of AutoCAD.
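For example, if your modeler can export Wavefront OBJ, a minimal conversion with the obj2rad translator that ships with Radiance might look like the following (check its man page for the material-mapping options your particular model may need):

$ obj2rad model.obj > model.rad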
The input files of Radiance are human readable, which makes them good candidates for script generation. However, be warned: occasional terms in the documentation will cause accelerated heart rates in passing physicists, such as “watt per square meter per steradian”. Be sure to check out all the documentation on the web site. If you decide to do more than play, you might want to track down a copy of Rendering With Radiance by Greg Ward Larson, et al. It is currently out of print, so check with used book dealers.
Listing 1 is a scene that includes sky and ground, the material for brass and a sphere with the brass material applied. The sky and ground are standard. The only things you need to edit for your own scenes are the options to gensky. The values in the listing correspond to noon on November 25 at 33° latitude north and 80° longitude west. Use negative numbers for south and east.
Each item in the scene description has the same format. The first line declares an existing material that will be applied to the entry (or void if that doesn't apply), a type name for a material or geometric primitive (like sphere, polygon, plastic or metal) and a user-defined name. The next three groups are the string, integer and real (floating point) parameters for the entry. Each of these starts with an argument count, followed by the actual arguments. They can be spread over as many lines as necessary.
Most entries have only real parameters. This explains the two zeros in the middle of most of the entries; they have no string or integer parameters. The 5 in the last line of brass indicates five real parameters, and the 4 in the last line of the sphere indicates four real parameters. The parameters are straightforward. For example, a sphere needs a center (x, y, z) and a radius.
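To give a feel for the layout, here is a rough sketch in the same spirit as Listing 1 (it is not the listing itself; the sky and ground glow values follow the standard idiom from the Radiance example files, and the brass and sphere numbers are merely plausible placeholders):

!gensky 11 25 12 -a 33 -o 80

skyfunc glow sky_glow
0
0
4 0.9 0.9 1.1 0

sky_glow source sky
0
0
4 0 0 1 180

skyfunc glow ground_glow
0
0
4 1.4 0.9 0.6 0

ground_glow source ground
0
0
4 0 0 -1 180

void metal brass
0
0
5 0.75 0.60 0.25 0.85 0.02

brass sphere ball
0
0
4 0 0 1 0.5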
Materials can be the hardest part of a scene. It is easiest to start by copying existing materials and modifying them to your needs. Read refman.pdf from the web site for more details.
The gensky line at the top of Listing 1 is an embedded command-line utility. Placing an exclamation point at the beginning of a line in a Radiance scene tells the system to run the line as a shell command and use the output as part of the scene. Radiance comes with a number of these utilities, and I've found that writing your own can make scene generation quick and easy.
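The same trick works with the other generators. For instance, genbox produces a rectangular box, and it can be embedded and positioned in a single line (this sketch assumes a brass material like the one above is already defined, and the dimensions are arbitrary):

!genbox brass block 2 1 0.5 | xform -t -1 -0.5 0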