Smart Cards and Biometrics: Your Key to PKI
For centuries, security was synonymous with secrecy. The shared secret—two parties conducting business, each knowing the code—was the approach used worldwide. Even in this age of electronics and supercomputers, passwords and PINs are shared between you and the computer or ATM you want to access. But secret passwords require a great deal of trust between the parties sharing the secret. Can you always trust the administrator or other users of the machine you are accessing? Most computer break-ins today are due to compromise by system users or by a hacker who uses a legitimate account (possibly yours) to gain access—sometimes even superuser access. This traditional paradigm of shared-secret computer security could soon be a thing of the past, replaced by smart-card-based cryptographic credentials and biometric authentication for access control.
Some individuals and companies are replacing shared-secret security (also called symmetric security) with the Public Key Infrastructure (PKI) approach. PKI relies on a standardized set of transactions built on asymmetric public key cryptography, a more secure and potentially much more functional mechanism for controlling access to digital resources. The same system could also be used to secure physical access to controlled environments, such as your home or office.
In a PKI world, everyone would be issued at least one cryptographic key pair. Each key pair consists of a secret (private) cryptographic key and a public cryptographic key. These keys are typically 1024-bit or 2048-bit strings of binary digits with a unique property: when one is used with an encoding algorithm to encrypt data, only the other can be used with the same algorithm to decrypt it—the encoding key cannot decode its own output. Public keys are certified by a responsible party such as a notary public, passport office, government agency or trusted third party. The public key is widely distributed, often through a directory or database that can be searched by the public, while the private key is kept a tightly guarded secret by its owner. Between sender and receiver, secure messaging (or any other secure transaction) would work as described below.
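This complementary-key property can be sketched with textbook RSA. The primes, exponents and message below are toy values chosen so the arithmetic is easy to follow—real keys are 2048 bits or more and use padding schemes omitted here.

```python
# Toy RSA: numbers small enough to follow by hand (NOT secure).
p, q = 61, 53                      # two secret primes
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

m = 65                             # a message, encoded as a number < n

# Encrypt with the public key; only the private key can reverse it.
c = pow(m, e, n)
assert pow(c, d, n) == m           # the private key decrypts
assert pow(c, e, n) != m           # the encoding key cannot decode

# The same pair works in reverse for digital signatures:
s = pow(m, d, n)                   # "sign" with the private key
assert pow(s, e, n) == m           # anyone can verify with the public key
```

The signature direction is what makes the scheme asymmetric in a useful way: anyone holding the widely published public key can check the signature, but only the owner of the private key could have produced it.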
For the sender (Figure 1), the following steps occur:
Message data is hashed; that is, a variable-length input string is converted to a fixed-length output string. Hash functions are mainly used with public key algorithms to create digital signatures.
A symmetric key is created and used to encrypt the entire message. DES and IDEA are examples of symmetric key cryptography.
The symmetric key is encrypted with the receiver's asymmetric public key.
The message hash is encrypted with the sender's asymmetric private key, creating a digital signature independent of the encrypted message.
The encrypted message, encrypted symmetric key and signed message hash are sent to the receiver.
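The sender's five steps can be sketched as follows. This is a minimal illustration, not any product's API: it assumes two toy small-prime RSA key pairs (one for the sender, one for the receiver), uses SHA-256 for hashing, and stands in for DES or IDEA with a simple SHA-256 keystream cipher. Encrypting a key byte-at-a-time with RSA is done here only for readability.

```python
import hashlib, secrets

def make_toy_keys(p, q, e=17):
    # Returns (public, private) toy RSA keys; real keys are 2048+ bits.
    n = p * q
    return (e, n), (pow(e, -1, (p - 1) * (q - 1)), n)

sender_pub, sender_priv = make_toy_keys(61, 53)
receiver_pub, receiver_priv = make_toy_keys(89, 97)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR the data with a SHA-256 counter keystream.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"Meet at noon."

# 1. Hash the message (variable-length input, fixed-length output).
digest = hashlib.sha256(message).digest()

# 2. Create a symmetric key and encrypt the entire message with it.
sym_key = secrets.token_bytes(16)
ciphertext = xor_stream(sym_key, message)

# 3. Encrypt the symmetric key with the receiver's PUBLIC key.
e_r, n_r = receiver_pub
wrapped_key = [pow(b, e_r, n_r) for b in sym_key]

# 4. Sign the hash with the sender's PRIVATE key (the digital signature).
d_s, n_s = sender_priv
signature = [pow(b, d_s, n_s) for b in digest]

# 5. Everything that travels to the receiver.
package = (ciphertext, wrapped_key, signature)
```

Note that two different key pairs are involved: the receiver's pair protects the symmetric key, while the sender's pair produces the signature, which stays valid independently of the encrypted message.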
For the receiver (Figure 2), these steps occur:
The encrypted symmetric key is decrypted using the receiver's asymmetric private key.
The symmetric key is then used to decrypt the message body.
The encrypted hash is decrypted with the sender's asymmetric public key.
The decrypted message is then rehashed with the original hashing algorithm.
The two hashes are compared; a match verifies the sender's identity and serves as proof that the message was not altered in transit.
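The receiver's side can be sketched as a full round trip using the same illustrative toy primitives (small-prime textbook RSA, SHA-256, and a SHA-256 keystream as the symmetric cipher); none of this is production cryptography.

```python
import hashlib, secrets

def make_toy_keys(p, q, e=17):
    # (public, private) toy RSA keys; illustrative small primes only.
    n = p * q
    return (e, n), (pow(e, -1, (p - 1) * (q - 1)), n)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: SHA-256 counter keystream XORed with the data.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

sender_pub, sender_priv = make_toy_keys(61, 53)
receiver_pub, receiver_priv = make_toy_keys(89, 97)

# --- Sender builds the package (Figure 1) ---
message = b"Meet at noon."
sym_key = secrets.token_bytes(16)
package = (
    xor_stream(sym_key, message),                 # encrypted message
    [pow(b, *receiver_pub) for b in sym_key],     # encrypted symmetric key
    [pow(b, *sender_priv)                         # signed message hash
     for b in hashlib.sha256(message).digest()],
)

# --- Receiver unpacks it (Figure 2) ---
ciphertext, wrapped_key, signature = package

# 1. Recover the symmetric key with the receiver's private key.
key = bytes(pow(c, *receiver_priv) for c in wrapped_key)

# 2. Decrypt the message body with the symmetric key.
plaintext = xor_stream(key, ciphertext)

# 3. Decrypt the signed hash with the sender's public key.
claimed_digest = bytes(pow(s, *sender_pub) for s in signature)

# 4. Rehash the decrypted message, then 5. compare the two hashes.
assert hashlib.sha256(plaintext).digest() == claimed_digest
assert plaintext == message   # sender verified, message intact
```

If either the ciphertext or the signature had been tampered with in transit, the final hash comparison would fail, which is exactly the guarantee the protocol is built to provide.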
Several technical issues must be solved before this scenario can be realized:
Secure key storage
Secure authentication to the key store
Directory services (central public key database)
Key escrow or other emergency recovery method for encrypted data
Cross-platform standards (Microsoft PC/SC, Netscape S/MIME, Intel CDSA, IBM OSF, etc.)
International export and usage regulations for strong cryptography
If these issues seem daunting, remember how impossible a common network strategy once seemed. Today, the Internet is everywhere and connected to almost every type of computer. Let's consider the PKI issues one by one.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
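The find-plus-grep combination described above is a one-liner; the directory tree and search string here are made up for the example.

```shell
# Build a small demo tree, then find every .log file under it
# and search each one for the entry "ERROR".
mkdir -p /tmp/demo_home/user1 /tmp/demo_home/user2
echo "ok line"     > /tmp/demo_home/user1/app.log
echo "ERROR: oops" > /tmp/demo_home/user2/sys.log
find /tmp/demo_home -name '*.log' -exec grep -H 'ERROR' {} +
```

Against a real system you would point find at /home instead of the demo tree; the `-exec … {} +` form hands the matching files to grep in batches rather than spawning one grep per file.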
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- SUSE LLC's SUSE Manager
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide