Java Class Reference Package
Publisher: Specialized Systems Consultants (SSC)
Reviewer: Dave Dittrich
Back in the mid 80s, when I first pulled a Unix workstation—an Intergraph InterPro, running System V Unix—out of a closet at Boeing and began to teach myself Unix system administration, I needed something to help me remember shell command options. The first reference I bought was the System V command reference from Specialized Systems Consultants. It sits in my desk drawer to this day, and still sees daylight every time I'm confronted with the question, “What option is it that I need to use with the foobar command on a System V system?” This reference card cost me about $5, if I remember correctly, which is about the same price it goes for today, ten years later.
That SSC has produced a Java class library reference isn't a big surprise. These references are the backbone of their publishing business, which now extends to Linux documentation, CD-ROM software collections, Linux and WWW magazines—even t-shirts! With the new “Internet runs on dog years” world that Java exists in, I seriously doubt that these cards will have the same shelf life as my System V reference, but System V isn't the same as it was back then, either. I'm sure many will find them equally useful just the same.
Before trying to assess these new Java reference cards, I think it's helpful to consider what these references are and are not, and just who is likely to use them.
These reference cards are not introductory overviews of the Java language, like the vast majority of Java books on the shelves today. They do not include tutorials or introductory text for each package, like O'Reilly & Associates' Java in a Nutshell. They are not API documentation, like The Java API (both volumes) from Addison-Wesley. I don't see them as being “competitors” with anything else out there right now (except, perhaps, the Java API hypertext pages themselves, which are a bit awkward to use sometimes). So what are they, and how well do they do their job?
The Java reference cards—one for the java.applet, java.awt, and java.util packages, and another for the java.lang, java.io, and java.net packages—are a concise, classified (no pun intended) listing of the methods and important constants associated with each class in these packages. No more and no less (well, at least not that much less).
Together, the two references cover 38 panels. Although not stated explicitly, they use a syntax somewhat similar to Unix man pages, where optional parameters are surrounded by square brackets. For example, rather than list two lines for each method signature, like this:
BufferedInputStream(InputStream dest);
BufferedInputStream(InputStream dest, int buffersize);
they include just one line, like this:
BufferedInputStream(InputStream dest [,int buffersize]);
While this syntax is not exactly what you'll find in other books on Java, you get used to it quickly, and most people will probably appreciate the brevity it contributes to the listings.
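For readers new to the notation, a minimal sketch (using the java.io classes from the JDK; the class and file names here are my own) shows the two constructors that the single bracketed line stands for:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class BracketNotationDemo {
    public static void main(String[] args) throws IOException {
        // The card's one-line entry
        //   BufferedInputStream(InputStream dest [,int buffersize]);
        // covers both of these constructors:
        BufferedInputStream plain = new BufferedInputStream(
                new ByteArrayInputStream("hello".getBytes()));
        BufferedInputStream sized = new BufferedInputStream(
                new ByteArrayInputStream("hello".getBytes()), 4096);
        System.out.println((char) plain.read()); // h
        System.out.println((char) sized.read()); // h
    }
}
```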
Each package makes up its own section, with classes within the package in their own graphic box. This makes for a clear delineation between classes within each package. The box titles include only the class name. For example, just the title Class heads a box in the JAVA.LANG section (I'm not sure why they're YELLING), rather than being explicitly labeled java.lang.Class as the compiler will expect. In practical use it can be hard to determine exactly what you need to import to use this class in your code. When you are thirteen pages into the card and find the Color class, you have to backtrack page by page to find that Color is in the JAVA.AWT package and know to add import java.awt.Color; (or the more general import java.awt.*;) into your code. (This is a pretty minor gripe. If it really bothers you, you can always just write in the package name with a pen).
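To make the gripe concrete, here is a minimal sketch of what the compiler actually needs once you have tracked Color down to its package (the demo class name is my own; Color and its constants are from the JDK API):

```java
// The card's box is titled just "Color"; the compiler wants the full path.
import java.awt.Color;   // or the broader: import java.awt.*;

public class ColorImportDemo {
    public static void main(String[] args) {
        Color c = Color.red;               // a JDK 1.0-era constant
        System.out.println(c.getRed());    // 255
    }
}
```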
Carefully going through some of the class descriptions also turned up what appear to be a few errors of omission and one parameter mix-up. For example, in the Component section of JAVA.AWT, the missing methods are checkImage(), getPeer(), location(), prepareImage(), size(), and toString(). The repaint() method has the maxWait parameter at the end of the list, when it should be at the beginning. Since most of these methods involve the complicated image producer/consumer mechanism, or are accessory functions more interesting to people programming layout managers than to those just building a simple GUI, the omissions may not matter to the majority of Java coders. Missing from Label is addNotify(), but the 1.0.2 JDK API documentation itself says of this method, “Most applications do not call this method directly.”
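For reference, the actual JDK signature puts the delay first. A minimal sketch, using a bare Canvas (which can be constructed without a display; the repaint call is a no-op until the component has a peer):

```java
import java.awt.Canvas;
import java.awt.Component;

public class RepaintOrderDemo {
    public static void main(String[] args) {
        Component c = new Canvas();
        // Correct JDK signature: repaint(long tm, int x, int y, int width, int height)
        // -- maxWait (tm) is the FIRST parameter, not the last as the card lists it.
        c.repaint(500, 0, 0, 100, 100);
        System.out.println("repaint scheduled");
    }
}
```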
More glaring is the lack of two entire packages, java.awt.image and java.awt.peer. Granted, these two packages are more interesting to people doing quite complicated graphics programming, or coding new AWT peers for window managers other than the already supported Windows, Macintosh and Motif, but they are still part of the JDK class library that a programmer may use. The image producer/consumer paradigm is quite confusing and is sometimes criticized as such in Java books, but if the programmer is forced to also have handy a copy of Java in a Nutshell to get the whole API picture, many will probably opt to just go with the book.
The author, Randy Chapman, is intimately familiar with the JDK through his work on the Linux port, and I know him from his days in the Academic Computer Center at the University of Washington to be a very careful and thorough programmer. I am not sure whether the omissions stem from working with an older API, a deliberate effort to simplify things for the average programmer, or simply time pressures. (He is, after all, still a student with educational demands high on his priority list. I won't fault him for that.) Since it isn't stated explicitly, I will assume that the aim is to simplify the card and conclude that the target audience is the beginning to intermediate programmer who sticks to coding “average” Java applications or applets, not the kind of Java fanatic who thinks triple tall espressos should be purchased in pairs. (Those people would prefer to write Perl scripts to extract this information directly from the JDK source code tree and run the results through nroff!)
Overall, these cards are quite handy to have lying next to your keyboard and will prove to be well worth the small price that SSC charges for them. I wish more books had this high a usefulness-to-price ratio.