Selecting I/O Units

Selecting and communicating with I/O units. The I/O unit physical layer interface. Understanding the wide perspective.
Analog Signal Conditioning

With analog sensors, the signal the sensor sends may not be adequate for the I/O unit's circuitry. Conversely, a signal that emanates from an I/O unit output may not be robust enough for the effector. In these cases, an analog circuit that receives a weak or limited signal and conditions it to the appropriate range or robustness may be needed. This modification or improvement of the original signal is called signal conditioning.

If this sounds complex, it is. Fortunately, most I/O units incorporate signal conditioning internally to cover most of the common sensor requirements. For those sensors that have very difficult signal conditioning requirements, such as strain gages and accelerometers, or sensors with additional requirements such as excitation (a power source), separate signal-conditioning units are available to convert these signals. Occasionally, a custom signal-conditioning circuit may be necessary, and high-accuracy integrated analog circuits make it possible to create one. While the detailed function of the circuit is complex, the integrated analog circuit tremendously simplifies the design. With an integrated analog chip, a few other components and a power supply, a simple signal-conditioning circuit can be built inexpensively.

Converting Analog Signals for the Computer

Computers understand only numbers. They don't understand continuous analog electrical signals. So at the I/O unit, the conditioned analog input signal is digitized into a number so the computer will understand it. If the signal is an output, the I/O unit converts a number from the computer into a voltage or a current. The number that is received from an analog-to-digital converter or sent to a digital-to-analog converter is called counts. Counts represent the proportional value of the analog signal in digital form. Counts typically range from zero to 2^n - 1 (although there are modified implementations of this), where n is the number of bits of the converter.
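As a minimal sketch of that mapping, here is a pair of conversion helpers. The function names, the 0-10V input range and the 12-bit default are my own assumptions for illustration; a unipolar converter is assumed in both directions.

```python
def volts_to_counts(v, v_ref=10.0, bits=12):
    """Map a conditioned 0..v_ref volt signal onto 0..2**bits - 1 counts,
    as an analog-to-digital conversion would."""
    max_counts = 2**bits - 1
    v = min(max(v, 0.0), v_ref)   # clamp to the converter's input range
    return round(v / v_ref * max_counts)

def counts_to_volts(counts, v_ref=10.0, bits=12):
    """Inverse mapping: counts back to volts, as a digital-to-analog
    converter would produce on an output channel."""
    return counts / (2**bits - 1) * v_ref
```

For a 12-bit converter, a full-scale 10V input lands on count 4,095 and zero volts on count 0, matching the zero to 2^n - 1 range described above.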

Many I/O units also support engineering units (physical quantities) in floating-point formats. For example, most of us don't think of room temperature as 100 counts but as 77ºF (25ºC). The I/O unit will graciously perform the engineering-unit-to-counts conversion. Temperature conversion can be very complex due to the nonlinear nature of the sensor. Other conversions may not be difficult, just tedious to calculate (I avoid tedious calculations and let the I/O unit do them).
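For the simple (linear) case, the engineering-unit scaling the I/O unit performs can be sketched as below. The function name and the 0-100 scaling range are assumptions for illustration; as noted above, real temperature sensors usually need a nonlinear, sensor-specific curve instead.

```python
def counts_to_engineering(counts, bits=12, eu_min=0.0, eu_max=100.0):
    """Linearly scale raw counts into engineering units
    (e.g. 0..100 degrees for a hypothetical linear sensor)."""
    span = 2**bits - 1
    return eu_min + counts / span * (eu_max - eu_min)
```

Zero counts maps to the bottom of the engineering range and 2^n - 1 counts to the top; everything in between is a straight-line interpolation.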

Analog Accuracy

In this age of digital multimedia, we are fairly familiar with the term bits when referring to the resolution of analog-to-digital conversion. Analog-to-digital converters in instrumentation systems typically range from 12 to 18 bits of representation (4,096 to 262,144 counts). The number of bits dictates how many counts, or discrete levels, the analog signal may be described in. The number of discrete levels determines the resolution of the digitized analog signal.

Changing number schemes for a moment, I'll refer to percent of scale instead of counts. If I think of zero counts as 0% and the maximum count my analog-to-digital converter represents as 100%, I can start talking about signal magnitudes again, but in the quantized, discrete digital world.

Precision is tied to resolution. I think of precision as the size of the step each count represents. Mathematically, it's the inverse of the number of counts times 100%. In the case of a 12-bit converter, each count represents 0.024% of full scale. For the 18-bit converter, each count is 0.00038% of full scale. Precision describes how small the steps are that the count value represents, which means that the higher the number of bits of resolution, the closer the count is to the physical analog value (assuming that there are no errors in this conversion).
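The arithmetic above is easy to check. This one-line sketch uses 2^n counts as the divisor, which matches the figures quoted (100% / 4,096 is about 0.024%, and 100% / 262,144 is about 0.00038%):

```python
def percent_per_count(bits):
    """Percent of full scale represented by a single count,
    using 2**bits counts as the divisor."""
    return 100.0 / 2**bits
```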

The number of steps, or the size of the steps the converter represents, is meaningless unless we know the accuracy of the converter. Accuracy is how much error the converter may introduce, reported either in counts or as a percentage of full scale. Linearity, which is the error due to limitations of the data converter, also may be specified. Linearity varies over the range of input. For example, it may be 0.1% at 10% of input, 0.2% at 50% of input and 0.02% at 100% of input. Clearly, the error doesn't scale with the input. Specifications may show linearity as a tolerance specification, a graph or an algebraic constant in a mathematical equation.

With high-resolution, very linear integrated analog circuits, accuracy and linearity are very high. It's been quite some time since I've had to consider how error-prone my converter is. These days, measurement errors generally occur when the accuracy of the analog/digital circuitry exceeds the accuracy of the sensor or effector.

