Using a Cryptographic Hardware Token with Linux: the OpenSSL Project's New Engine
The engine implementation dynamically links with a hardware token's library at run time. The user specifies the engine type with the "-engine" option on the OpenSSL command line, which loads the necessary library and links in the needed token-specific functions. Under Linux, dynamic linking with a shared-object library happens through the dlopen() call. Unfortunately, the Red Hat 6.2 Linux libraries that Chrysalis-ITS provided us had a minor flaw. One symbol was missing, something left over from the Solaris libraries (from which the Linux implementation was presumably cloned). As soon as the dlopen() call returned, it indicated an error. The missing symbol, _IO, set me back a few days of head scratching as I tried to find the miscreant library I was missing in the linking phase. After a few go-rounds with the able Chrysalis-ITS customer support staff, I was back in action with a patched library. It's a testament to good customer support when they provide you with a patched library within 24 hours! Furthermore, we got permission to contribute the include file (cryptoki.h) from the Luna2 toolkit back to the OpenSSL project.
With the new library in place, OpenSSL performed flawlessly with the engine code for building certificate requests. Unfortunately, not all of the OpenSSL commands were implemented for use with the engine; in particular, I needed to sign e-mails using the OpenSSL smime command. After looking over the source code for the OpenSSL "req" implementation, I was able to port the necessary command-line option (-keyform) and get the smime command working within a few hours.
One tricky part of the implementation was deciding how engine-specific issues would be handled. In particular, a user needs to specify which key on the token should be used for signing and provide the PIN for access to a specific token. Because the token fits into a PCMCIA slot, you also have to indicate to the program the slot in which the token is placed. These issues were easily solved with the "-key" argument provided with the smime command. Normally the -key argument points to a file, but in the case of an engine this is superfluous, since the key is on the token. The Luna2, however, allows a number of keys to be placed on the token, so specifying the key we wanted to use became necessary. Therefore, I used the -key argument to pass the name of the key, the PIN and the slot of the token to be used.
I wrote a simple token initialization program that loaded a DSA key-pair onto the token, and then we were ready to begin. The first thing we needed to perform S/MIME signing, besides the key-pair, was a certificate, and the way to get that was to build a certificate request. This was easy with the new OpenSSL Luna2 engine implementation:
openssl req -engine luna2 -keyform engine -text -key DSA-public:1:1234 -config config_info -out cr.pem
The -key argument holds the name of the key, the PCMCIA slot the key is in and the user's PIN for the token.
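The token-initialization step mentioned above is Luna2-specific, but for experimenting without a token, an equivalent DSA key pair can be generated in software with OpenSSL itself. This is only a sketch of that step; the file names are arbitrary, and with a real token the key pair would never leave the hardware:

```shell
# Generate DSA parameters, then a key pair from them (a software-only
# stand-in for loading a key pair onto the token):
openssl dsaparam -out dsaparam.pem 2048
openssl gendsa -out dsakey.pem dsaparam.pem

# Inspect the resulting private key to confirm it is well-formed:
openssl dsa -in dsakey.pem -noout -text | head -3
```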
The config_info file held the following information needed for the certificate request:
[ req ]
distinguished_name = req_distinguished_name
prompt = no

[ req_distinguished_name ]
C = US
ST = New York
L = New Paltz
O = ISTI
OU = HQ
CN = Paul Friberg
emailAddress = email@example.com
Once I received a certificate back from the client's certificate authority, I was ready to sign e-mails using S/MIME. To do that with this implementation, I simply issued the following complex command line, using the file email.txt as input.
openssl smime -sign -engine luna2 -in email.txt -out signed.email.txt -signer certificate.pem -keyform engine -inkey DSA-Public:1:1234
The output of the command is a file (signed.email.txt) containing the S/MIME clear-signed content, with my certificate attached. The certificate is attached to give the receiver some assurance that I am who I claim to be; that is, if they believe the CA that issued it.
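On the receiving end, the signature can be checked with the smime -verify command. The following is a software-only sketch with no token involved: a throwaway RSA key and self-signed certificate stand in for the token-held key and the CA-issued certificate, and all file names are made up:

```shell
# Create a throwaway key and self-signed certificate (stand-ins for the
# token-held key and the CA-issued certificate):
openssl req -x509 -newkey rsa:2048 -nodes -keyout test.key -out test.crt \
    -days 30 -subj "/C=US/ST=New York/O=ISTI/CN=Test Signer"

# Clear-sign a message, attaching the certificate as described above:
echo "hello from the test signer" > email.txt
openssl smime -sign -in email.txt -out signed.email.txt \
    -signer test.crt -inkey test.key

# Verify the clear-signed message, trusting our self-signed certificate
# as the CA; the recovered content lands in verified.txt:
openssl smime -verify -in signed.email.txt -CAfile test.crt -out verified.txt
```

In the real deployment, -engine luna2, -keyform engine and the -inkey triple replace the software key, and -CAfile would point at the actual CA's certificate.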
With this task finished, we delivered a working solution to our client, complete with source code. Having the source code and authors willing to talk with you significantly speeds development. Furthermore, having a customer support team that understands and supports Linux is a tremendous plus.
Paul Friberg is the Vice President of Instrumental Software Technologies, Inc., a software development firm specializing in custom solutions under UNIX and Linux. When not programming, he can often be found clinging to the side of a rock (not under it) or a frozen waterfall. He can be reached at firstname.lastname@example.org.