Now that we have deployed our Calculator EJB, let's write a short Java program that uses it. Listing 4 contains the source code for such a class, UseCalculator.java.
Although our program is completely independent of our EJB classes and can be compiled and run separately (even on a separate computer), we use Ant to keep track of the CLASSPATH (which must include the JBoss classes, as well as those from our .jar file), compile our code and then run it. Running our application is then simply a matter of invoking the appropriate Ant target.
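A run target along the following lines would do the job. This is only a sketch; the target names, paths and property names here are assumptions, not taken from the article's actual build.xml:

```xml
<!-- Hypothetical "run" target. The "deploy" dependency, the
     build/classes path and the ${jboss.dir} property are all
     assumptions about how build.xml might be organized. -->
<target name="run" depends="deploy">
    <java classname="UseCalculator" fork="true">
        <classpath>
            <pathelement location="build/classes"/>
            <fileset dir="${jboss.dir}/client" includes="*.jar"/>
        </classpath>
    </java>
</target>
```

With a target like this in place, typing `ant run` ensures that everything upstream (compiling, jarring, deploying) happens before the client program starts.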
This runs our program after ensuring that our EJB is compiled, turned into a .jar file and deployed.
Anything that UseCalculator.main() writes to System.out (also known as the stdout filehandle) is printed on the screen when we run Ant. However, anything that our CalculatorBean method writes to stdout is printed to the JBoss logging output. By keeping JBoss open in one terminal window and running Ant in another, we can see them communicate with each other.
UseCalculator's main() method consists of several standard steps for connecting to and using our EJB. We first connect to JNDI, which keeps track of the objects currently deployed to JBoss; this connection is known as a context. Our program looks for jndi.properties, a short Java properties file that tells it where to find a context (this file should be placed in $CALCULATOR/resources/, as specified in build.xml). The file is in Java properties format, with one name=value pair per line:
java.naming.factory.initial=
java.naming.provider.url=localhost:1099
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
Once we have our context, we look up our object using the name that we gave it in jboss.xml, the JBoss-specific deployment descriptor packaged alongside ejb-jar.xml in our .jar file's META-INF directory. Without jboss.xml, JBoss will not associate the right name with our EJB, making it impossible to find using JNDI.
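As a sketch, the relevant stanza of jboss.xml might look like this; the ejb-name and jndi-name values shown here are assumptions based on the names used in this article, not copied from its listings:

```xml
<!-- META-INF/jboss.xml (hypothetical): maps the bean declared in
     ejb-jar.xml to the JNDI name that clients will look up. -->
<jboss>
    <enterprise-beans>
        <session>
            <ejb-name>Calculator</ejb-name>
            <jndi-name>Calculator</jndi-name>
        </session>
    </enterprise-beans>
</jboss>
```

The ejb-name must match the one declared in ejb-jar.xml; the jndi-name is what the client passes to its JNDI lookup.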
JNDI returns an object reference, which we cast into an instance of CalculatorHome and then use to create an instance of Calculator. Notice that we create an instance of Calculator (the remote interface), rather than one of CalculatorBean. The remote interface provides us with a transparent connection to an instance of CalculatorBean on the server, wherever that might be; at no time do we actually know where the real instance of CalculatorBean resides.
Finally, we invoke one of the methods defined in Calculator (the remote interface). The invocation is passed along to CalculatorBean (the bean class), which executes it, prints some logging information and returns a result, which we then print to stdout.
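Putting these steps together, the core of UseCalculator.main() might look roughly like the following. This is a sketch, not the actual Listing 4: the business method add() and the JNDI name "Calculator" are assumptions, and the code requires the Calculator and CalculatorHome interfaces plus the JBoss client .jar files on the CLASSPATH:

```java
// UseCalculator.java -- hedged sketch of the client described above.
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

public class UseCalculator {
    public static void main(String[] args) throws Exception {
        // jndi.properties on the CLASSPATH tells InitialContext
        // where to find the JBoss naming service.
        Context ctx = new InitialContext();

        // Look up the home interface under the name given in jboss.xml
        // ("Calculator" here is an assumption).
        Object ref = ctx.lookup("Calculator");
        CalculatorHome home = (CalculatorHome)
            PortableRemoteObject.narrow(ref, CalculatorHome.class);

        // Create a remote Calculator and invoke a business method;
        // add() is a hypothetical method name.
        Calculator calc = home.create();
        System.out.println("2 + 3 = " + calc.add(2, 3));
    }
}
```

The PortableRemoteObject.narrow() call is the standard RMI-IIOP way of converting the reference JNDI hands back; a plain Java cast is not guaranteed to work for remote home interfaces.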
This month we started to look at Enterprise JavaBeans, an infrastructure for creating distributed applications using Java. While EJB is far more complex than SOAP, XML-RPC or other distributed object systems, it is also designed to handle more complicated tasks. (For example, SOAP doesn't attempt to handle transactions; that's left to the application layer to implement.)
At the same time, working with Java often means spending more time on administrative and logistical issues, rather than on programming. Determining which file must be in which directory can often be frustrating, especially if you are used to working with a more dynamic language such as Perl or Python. Nevertheless, the pain quickly subsides when you see how easily you can create distributed applications with EJB. The fact that JBoss is so easy to download, install and run, and has a very small memory footprint, makes it simple for newcomers to try EJB.
Next month, we will continue working with EJB, looking at the heart of EJB, the entity beans that provide an object interface to our relational databases.