The Falcon Programming Language in a Nutshell
In late 2003, I had the problem of making business-critical decisions and performing maintenance actions in real time, analyzing data that was passing through the servers I was charged with controlling. Data throughput was on the order of thousands of messages per second, each of which was made of complex structures and possibly nested maps, whose size was measured in kilobytes.
The applications in charge of those controls were already almost complete, and they were heavily multithreaded by design. The only thing missing was the logic-processing engine. That would have been the perfect job for a scripting language, but the memory, CPU, threading, responsiveness and safety constraints seemed to be a hard match.
After testing the available solutions, I decided to try to solve the problem by writing a scripting language from the ground up, taking those design constraints into consideration. Once the decision was made to move forward, useful features commonly missing from other scripting languages were added to the design specification. So, from the beginning, Falcon was designed mainly to meet the following requirements:
Rapidly exchange (or use directly) complex data with C++.
Play nice with applications (especially with MT applications) and provide them with ways to control the script execution dynamically.
Provide several programming paradigms under the shroud of simple, common grammar.
Provide native multilanguage (UTF) support.
Provide a simple means to build script-driven applications, easily and efficiently integrated with third-party libraries.
As soon as I was able to script the applications that drove the initial development, while meeting those ambitious targets in terms of overall performance, I realized that Falcon might be useful and interesting to others as well, so I went open source.
The project is now reaching its final beta release phase, and Falcon has become both a standalone scripting language and a scripting engine that can drive even the most demanding applications.
The Falcon programming language now is included with many high-profile distributions, including Fedora, Ubuntu, Slackware, Gentoo and others. If your distribution doesn't include it yet, you can download it from www.falconpl.org, along with user and developer documentation.
Falcon currently is ported for Linux (32- and 64-bit), Win32 and Solaris (Intel). Older versions work on Mac OS X and FreeBSD. We will be porting the newer version shortly, and a SPARC port also should be ready soon.
Falcon is an untyped language with EOL-separated statements and code structured into statement/end blocks. It supports integer math (64-bit) natively, including bit-field operators, floating-point math, string arrays, several types of dictionaries, lists and MemBuffers (shared memory areas), among other base types and system classes.
Morphologically, Falcon doesn't break established conventions, for example:
function sayHello()
   printl( "Hello world!" )
end

// Main script:
sayHello()
You can run this script by saving it in a test file and feeding it into Falcon via stdin, or by launching it like this:
$ falcon <scriptname.fal> [parameters]
We place great emphasis on the multiparadigm model. Falcon is based on an open coding approach that seamlessly merges procedural, object-oriented, functional and message-oriented programming. We're also adding tabular programming, sort of a multilayer OOP, but we don't have the space to discuss that here. Each paradigm we support is generally a bit “personalized” to allow for more comfortable programming and easier mingling with other paradigms.
Falcon procedural programming is based on function declaration and variable parameters calls. For example:
function checkParameters( first, second, third )
   > "------ checkParameters -------"
   // ">" at line start is a short for printl

   if first
      > "First parameter: ", first
   end

   // ... and single-line statements
   // can be shortened with ":"
   if second: > "Second parameter: ", second
   if third: > "Third parameter: ", third

   > "------------------------------"
end

// Main script:
checkParameters( "a" )
checkParameters( "b", 10 )
checkParameters( "c", 5.2, 0xFF )
You can use RTL functions to retrieve the actual parameters passed to functions (or methods). Values also can be passed by reference (or alias), and functions can have static blocks and variables:
function changer( param )
   // a static initialization block
   static
      > "Changer initialized."
      c = 0
   end

   c++
   param = "changed " + c.toString() + " times."
end

// Main script:
param = "original"
changer( param )
> param            // will still be original

changer( $param )  // "$" extracts a reference
> param            // will be changed

p = $param         // taking an alias...
changer( $param )  // and sending it
> p                // still referring to "param"
Again, RTL functions can be used to determine whether a parameter was passed directly or by reference.
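For instance, here is a sketch of retrieving the actual parameters a function received. The accessor names (paramCount(), parameter()) are recalled from memory rather than taken from the article, so check the RTL reference for your Falcon version:

function showParams()
   // iterate over however many parameters the caller actually passed;
   // [0 : n] is a Falcon range literal
   for i in [0 : paramCount()]
      > "Parameter ", i, ": ", parameter( i )
   end
end

// Main script:
showParams( "a", 10, 5.2 )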
The strict directive forces variables to be declared explicitly via the def keyword:
directive strict=on

def alpha = 10   // we really meant to declare alpha
test( alpha )    // call before declaration is allowed

function test( val )
   local = val * 2   // error: not declared with def!
   return local
end
Falcon has a powerful statement to traverse and modify sequences. The following example prints and modifies the values in a dictionary:
dict = [ "alpha" => 1, "beta" => 2, "gamma" => 3, "delta" => 4, "fi" => 5 ] for key, value in dict // Before first, ">>" is a short for "print" forfirst: >> "The dictionary is: " // String expansion operator "@" >> @ "$key=$value" .= "touched" formiddle: >> ", " forlast: > "." end // see what's in the dictionary now: inspect( dictionary )
Notice the string expansion operator in the above code. Falcon provides string expansion via naming variables and expressions and applying an explicit @ unary operator. String expansions can contain format specifiers, like @ "$(varname:r5)", which right-justifies in five spaces, but a Format class also is provided to cache and use repeated formats.
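As a minimal sketch of the specifier described above (the variable name is mine):

value = 42
// right-justifies the expansion in five spaces, per the :r5 specifier
> @ "[$(value:r5)]"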
Both user-defined collections and language sequences provide iterators that can be used to access the list traditionally. Functional operators such as map, filter and reduce also are provided.
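The functional operators can be sketched as follows. The helper functions and data are mine, and the exact RTL signatures may vary between Falcon versions:

function square( x )
   return x * x
end

function isOdd( x )
   return x % 2 == 1
end

function add( a, b )
   return a + b
end

values = [ 1, 2, 3, 4, 5 ]

squares = map( square, values )    // each element squared
odds    = filter( isOdd, values )  // only the odd elements
total   = reduce( add, values )    // sum of all elements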