Creating OpenACS Packages
Now that our data model is installed, it's time to write an application that uses it. OpenACS 4 introduced a new templating system that builds upon ADP (the AOLserver equivalent of ASP or JSP), which I have found to be one of the best parts of OpenACS.
ASP-like pages are easier for nonprogrammers to understand than standard server-side programs. However, hybrids of HTML and program code tend to become maintenance bottlenecks, because the designer and the developer cannot work on the same file simultaneously.
OpenACS templates are a refreshing solution to this problem. We divide the page into two parts: one (the .tcl page) for the programmer and the other (the .adp page) for the designer. The .tcl page begins with a contract describing which values it expects to receive, as well as which values it will provide to the ADP page. The Tcl page ends with a call to ad_return_template, which looks for an .adp page of the same name, substitutes variables appropriately and then runs the ADP parser on the page.
The Tcl page can pass values to the ADP page in the form of data sources, a fancy term for variables. If the Tcl page says
set five 5
the ADP page can retrieve this value anywhere by surrounding the variable name with @ signs, as in @five@. If the variable five has not been defined or exported in the page contract, OpenACS produces a runtime error, complaining that no such variable exists.
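As a minimal sketch of this division of labor, a trivial page pair might look like the following (the file names and contract wording here are hypothetical, not from the article's listings):

```tcl
# five.tcl -- a hypothetical minimal Tcl page
ad_page_contract {
    Export a single value to the template.
} {
} -properties {
    five:onevalue
}

set five 5

ad_return_template
```

The matching five.adp then retrieves the value with the @-sign syntax:

```
<p>The value is @five@.</p>
```

Because five is declared in the -properties block, the templating system substitutes 5 wherever @five@ appears in five.adp.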
This splitting of responsibilities means that the designer and programmer can work independently, so long as the agreed-to interface described in the page contract (for Tcl page inputs and outputs) remains intact.
OpenACS comes with a large number of functions designed to help programmers create HTML forms quickly and easily. For the purposes of this introduction, we won't use these functions; we will use simple, raw HTML instead.
Our application will consist of two URLs:
One will display the current entries in our atf_hello_postings table, in chronological order, along with a form for entering a new posting. We will use the OpenACS database API, which is easier to use than the native AOLserver database API (and frees us from having to worry about threads and database pooling), to retrieve results from the database and stick them into a multirow variable. This variable then will be passed as a data source to the ADP page, where it will be displayed. The action for this form will point to a second URL.
The other is the Tcl program that receives the HTML form, enters a new row into the database and redirects people back to the first page.
In other words, we will need two Tcl pages and one ADP page for our application. These will all go in the www subdirectory under atf-hello; if that directory doesn't exist, create it, double-checking that it is owned by the same user that AOLserver runs as.
The first Tcl page (posting.tcl), shown in Listing 1, expects no parameters and exports a single data source named postings. The Tcl page begins with a call to ad_page_contract, which allows us to comment on the owner and purpose of the file (in the first parameter), the inputs we receive (in the second parameter, blank in posting.tcl) and the data sources we export (in the third parameter, named properties for historical reasons). The Tcl page ends with a call to ad_return_template, which looks for an .adp file with the same name as the current .tcl file.
While each data source is really a simple Tcl variable, the OpenACS templating system disguises the true nature of these variables somewhat, assigning each data source a name and a type (multirow, list, onevalue or onerow). In this particular case, our postings data source is a multirow, which means it will contain multiple result rows from our SELECT. The db_multirow procedure takes three arguments: the name of the variable into which the rows should be read, the name of the query and the SQL itself.
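Putting these pieces together, the skeleton of posting.tcl looks roughly like this (a sketch of the structure described above; the column names are assumptions, not the article's exact Listing 1):

```tcl
# posting.tcl -- sketch: contract, multirow query, template dispatch
ad_page_contract {
    Display current postings, plus a form for adding a new one.
} {
} -properties {
    postings:multirow
}

# db_multirow's three arguments: the data-source name, the query
# name and the SQL itself. Column names here are illustrative.
db_multirow postings get_postings {
    SELECT posting_text, posted_at
    FROM atf_hello_postings
    ORDER BY posted_at
}

ad_return_template
```

Each row returned by the SELECT becomes one row of the postings multirow, which the ADP page can then iterate over.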
Named queries are both a blessing and a curse in OpenACS. They are a blessing, in that they make it possible to work with multiple databases by placing each query in an external .xql file (an XML-formatted file containing versions of the query appropriate for particular databases). The problem is that OpenACS first looks for an .xql file associated with the query name, and consults the SQL in your .tcl page only if no such file exists. Many beginning OpenACS programmers are surprised to find that the system is ignoring their SQL modifications in the .tcl page, looking instead at the .xql file.
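For reference, an .xql file overriding the query named above might look like this (the file and query names are assumptions for illustration):

```
<?xml version="1.0"?>
<queryset>
  <fullquery name="get_postings">
    <querytext>
      SELECT posting_text, posted_at
      FROM atf_hello_postings
      ORDER BY posted_at
    </querytext>
  </fullquery>
</queryset>
```

If such a file exists alongside posting.tcl, the querytext here is what actually runs, regardless of the SQL in the .tcl page.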
When posting.tcl ends by invoking ad_return_template, the OpenACS templating system looks for posting.adp. Because our Tcl page contained all of the Tcl program code, we don't need the standard <% %> tags in our ADP page. However, we do need a way to translate our data source into something usable within HTML.
As shown in Listing 2 (posting.adp), the OpenACS templating system adds a few tags that make it possible to retrieve the values set in the .tcl page in a straightforward way:
We can include HTML conditionally in the output page by using an <if> tag, which compares (using eq and ne) two values. In posting.adp, we compare the number of rows in the postings data source (by querying @postings:rowcount@) with 0. If there are no rows to display, then we show nothing at all.
If there are rows to display, we iterate through them with the <multiple> tag. Within the <multiple> tag block, we can retrieve individual database columns with the @NAME.column@ syntax, as you can see in posting.adp.
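A sketch of what these tags look like in practice (the column names and form layout are illustrative, not the article's exact Listing 2):

```
<if @postings:rowcount@ ne 0>
  <multiple name="postings">
    <p>@postings.posting_text@ (posted @postings.posted_at@)</p>
  </multiple>
</if>

<form method="post" action="posting-add">
  <input type="text" name="posting_text" />
  <input type="submit" value="Post" />
</form>
```

The <if> tag suppresses the listing entirely when the multirow is empty, and the <multiple> block repeats once per row, with @postings.posting_text@ resolving to that row's column value.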
Regardless of how many rows we display, posting.adp always includes a small HTML form that sends its contents to the posting-add program. This program, posting-add.tcl (shown in Listing 3), starts with an ad_page_contract that declares a single input variable (posting_text). Input variables are mandatory by default, although we can declare them as optional or assign a default value by modifying the entry in ad_page_contract. In this particular case, we ask OpenACS to remove any leading or trailing whitespace from the text input we receive.
We then retrieve the current user ID using the built-in ad_get_user_id procedure, assigning that to the user_id variable. Next, we use db_dml to insert the posting into the database. Notice how we use colons (rather than dollar signs) before the variable names in db_dml; this is standard in the OpenACS database API and ensures that we will not encounter quoting problems when passing data to the database server.
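The resulting posting-add.tcl is short; a sketch of its structure follows (the column names in the INSERT are assumptions, not the article's exact Listing 3):

```tcl
# posting-add.tcl -- sketch: validate input, insert row, redirect
ad_page_contract {
    Insert a new posting, then redirect back to the listing page.
} {
    posting_text:trim
}

# Identify the currently logged-in user.
set user_id [ad_get_user_id]

# Colons (not dollar signs) mark bind variables, letting the
# database API handle quoting safely.
db_dml insert_posting {
    INSERT INTO atf_hello_postings (posting_text, poster_id)
    VALUES (:posting_text, :user_id)
}

ad_returnredirect posting
```

The :trim filter in the contract strips leading and trailing whitespace from posting_text before our code ever sees it.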
Finally, posting-add.tcl ends by redirecting the user to posting, which invokes posting.tcl and displays posting.adp.